Project description: Importance: The benefits of responsible sharing of individual-participant data (IPD) from clinical studies are well recognized, but stakeholders often disagree on how to align those benefits with privacy risks, costs, and incentives for clinical trialists and sponsors. The International Committee of Medical Journal Editors (ICMJE) has required a data sharing statement (DSS) from submissions reporting clinical trials since July 1, 2018. These required DSSs provide a window into current data sharing rates, practices, and norms among trialists and sponsors. Objective: To evaluate the implementation of the ICMJE DSS requirement in 3 leading medical journals: JAMA, Lancet, and New England Journal of Medicine (NEJM). Design, setting, and participants: This cross-sectional study examined clinical trial reports published as articles in JAMA, Lancet, and NEJM between July 1, 2018, and April 4, 2020. Articles not eligible for a DSS, including observational studies and letters or correspondence, were excluded. A MEDLINE/PubMed search identified 487 eligible clinical trials in JAMA (112 trials), Lancet (147 trials), and NEJM (228 trials). Two reviewers evaluated each of the 487 articles independently. Exposure: Publication of clinical trial reports in an ICMJE medical journal requiring a DSS. Main outcomes and measures: The primary outcomes were declared data availability and actual data availability in repositories. Other captured outcomes were data type, access, and the conditions and reasons for data availability or unavailability. Associations with funding sources were also examined. Results: A total of 334 of 487 articles (68.6%; 95% CI, 64%-73%) declared data sharing, with nonindustry NIH-funded trials exhibiting the highest rate of declared data sharing (89%; 95% CI, 80%-98%) and industry-funded trials the lowest (61%; 95% CI, 54%-68%). However, only 2 IPD sets (0.6%; 95% CI, 0.0%-1.5%) were actually deidentified and publicly available as of April 10, 2020. The remainder were reportedly accessible via request to the authors (143 of 334 articles [42.8%]), a repository (89 of 334 [26.6%]), or a company (78 of 334 [23.4%]). Among the 89 articles declaring that IPD would be stored in repositories, only 17 (19.1%) had deposited data, mostly because of embargoes and pending regulatory approval. An embargo was set in 47.3% of data-sharing articles (158 of 334), and in half of these the period exceeded 1 year or was unspecified. Conclusions and relevance: Most trials published in JAMA, Lancet, and NEJM after implementation of the ICMJE policy declared an intent to make clinical data available. However, a wide gap between declared and actual data sharing exists. To improve transparency and data reuse, journals should promote the use of unique pointers to data set locations and standardized choices for embargo periods and access requirements.
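The interval arithmetic above is easy to check. A minimal sketch, assuming a normal-approximation (Wald) interval for the declared-sharing proportion of 334/487; the abstract does not state which interval method the authors actually used:

```python
# Sketch: reproducing the reported 95% CI for the declared-sharing
# rate (334/487 = 68.6%; 95% CI, 64%-73%). Counts come from the
# abstract; the Wald interval is an assumption about the method.
import math

def wald_ci(successes: int, n: int, z: float = 1.96):
    """Normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

p, lo, hi = wald_ci(334, 487)
print(f"{p:.1%} (95% CI, {lo:.1%}-{hi:.1%})")  # ~68.6% (64.5%-72.7%)
```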
Project description: Objective: To determine whether two specific criteria in the Uniform Requirements for Manuscripts (URM) created by the International Committee of Medical Journal Editors (ICMJE), namely inclusion of the trial registration ID within manuscripts and timely registration of trials, are being followed. Materials and methods: Observational study using computerized analysis of publicly available Medline article data and clinical trial registry data. We analyzed a purposive set of five ICMJE founding journals, looking at all trial articles published in those journals during 2010-2011, together with data from the ClinicalTrials.gov (CTG) trial registry. We measured adherence to the trial ID inclusion policy as the percentage of trial journal articles that contained a valid trial ID within the article (journal-based sample). Adherence to timely registration was measured as the percentage of trials registered before enrollment of the first participant, within a 60-day grace period. We also examined timely registration rates by year for all phase II and higher interventional trials in CTG (registry-based sample). Results: To determine trial ID inclusion, we analyzed 698 clinical trial articles in the five journals. A total of 95.8% (661/690) of trial journal articles included the trial ID. In 88.3%, the trial-article link is stored within a structured Medline field. To evaluate timely registration, we analyzed trials referenced by 451 articles from the selected five journals. A total of 60% (272/451) of articles were registered in a timely manner, with an improving trend for trials initiated in later years (eg, 89% of trials that began in 2008 were registered in a timely manner). In the registry-based sample, timely registration rates ranged from 56% for trials registered in 2006 to 72% for trials registered in 2011. Discussion: Adherence to URM requirements for registration and trial ID inclusion increases the utility of PubMed and links it in an important way to clinical trial repositories. This new integrated knowledge source can facilitate research prioritization, clinical guideline creation, and precision medicine. Conclusions: The five selected journals adhere well to the policy of mandatory trial registration and also outperform the registry in adherence to timely registration. The ICMJE's URM policy represents a unique international mandate that may be providing a powerful incentive for sponsors and investigators to document clinical trials and trial result publications, and thus fulfill important obligations to trial participants and society.
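The timely-registration criterion above reduces to a single date comparison with a 60-day grace period. A minimal sketch of that rule; the field names are hypothetical, and real ClinicalTrials.gov records store dates under different keys:

```python
# Sketch of the timeliness rule described above: a trial counts as
# registered in a timely manner if its registry entry predates first
# participant enrollment, allowing a 60-day grace period.
from datetime import date, timedelta

GRACE = timedelta(days=60)

def is_timely(registered: date, first_enrollment: date) -> bool:
    """True if registration is no later than 60 days after enrollment began."""
    return registered <= first_enrollment + GRACE

print(is_timely(date(2010, 3, 1), date(2010, 1, 15)))  # True: within the grace period
print(is_timely(date(2010, 6, 1), date(2010, 1, 15)))  # False: registered too late
```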
Project description: Although selective reporting of clinical trial results introduces bias into evidence-based clinical decision making, the extent of publication bias in pediatric epilepsy is currently unknown. Because there is considerable ambiguity in the treatment of an important and common clinical problem, pediatric seizures, we assessed the public availability of results of phase 3 clinical trials that evaluated treatments of seizures in children and adolescents as a surrogate for the extent of publication bias in pediatric epilepsy. We determined the proportion of published and unpublished study results of phase 3 clinical trials that were registered as completed on ClinicalTrials.gov. We searched ClinicalTrials.gov, PubMed, and Google Scholar for publications and contacted principal investigators or sponsors. The analysis was performed according to STROBE criteria. Among studies completed before 2014 (N = 99), 75 (76%) pediatric phase 3 clinical trials were published but 24 (24%) remained unpublished. The unpublished studies concealed evidence from 4,437 patients. Mean time to publication was 25 ± 15.6 months (SD), more than twice as long as mandated. Ten years after the ICMJE's clinical trial registration initiative, there is still a considerable amount of selective reporting and publication delay that potentially distorts the body of evidence on the treatment of pediatric seizures.
Project description: Authorship represents a critical element of scientific research. This study evaluated the perceptions, attitudes, and practices of Jordanian researchers toward the International Committee of Medical Journal Editors (ICMJE) authorship criteria. An anonymous questionnaire was distributed to health sciences faculty (n = 986), with 272 participants completing the questionnaire. Only 27.2% reported awareness of the ICMJE guidelines; yet 76.8% agreed that all ICMJE criteria must be met for authorship, and 55.9% believed that the guidelines are easy to apply. Unethical authorship practices were reported by 16.5% to 31.3% of participants. A majority (73%) agreed that violation of authorship criteria is scientific misconduct. Well-defined criteria for authorship need to be disseminated and emphasized in less developed countries through training, to avoid authorship disputes and unethical conduct.
Project description: Registration of clinical trials is critical for promoting transparency and integrity in medical research; however, trials must be registered prospectively to deter unaccounted protocol modifications or selection of alternate outcomes that may enhance the favorability of reported findings. We assessed adherence to the International Committee of Medical Journal Editors' (ICMJE) prospective registration policy and identified the frequency of registrations occurring after potential observation of primary outcome data among trials published in the highest-impact journals associated with US professional medical societies. Additionally, we examined whether trials that were unregistered or registered after potential observation of primary outcome data were more likely to report favorable findings. We conducted a retrospective, cross-sectional analysis of the 50 most recently published clinical trials that reported primary results in each of the ten highest-impact US medical specialty society journals between 1 January 2010 and 31 December 2015. We used descriptive statistics to characterize the proportions of trials that were: registered; registered retrospectively; registered retrospectively potentially after initial ascertainment of primary outcomes; and reporting favorable findings, overall and stratified by journal and trial characteristics. Chi-squared analyses were performed to assess differences in registration by journal and trial characteristics. We reviewed 6869 original research reports published between 1 January 2010 and 31 December 2015 to identify a total of 486 trials across 472 publications. Of these 486 trials, 47 (10%) were unregistered. Among the 439 registered trials, 340 (77%) were registered prospectively and 99 (23%) retrospectively. Sixty-seven (68%) of these 99 retrospectively registered trials, or 15% of all 439 registered trials, were registered after potential observation of primary outcome data ascertained among participants enrolled at inception. Industry-funded trials, those with enrollment sites in the US, and those assessing FDA-regulated interventions each had lower rates of retrospective registration. Unregistered trials were more likely than registered trials to report favorable findings (89% vs. 64%; relative risk (RR) = 1.38, 95% confidence interval (CI) = 1.20-1.58; p = 0.004), irrespective of registration timing. Adherence to the ICMJE's prospective registration policy remains substandard, even in the highest-impact journals associated with US professional medical societies. These journals frequently published unregistered trials and trials registered after potential observation of primary outcome data.
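The relative risk above can be approximately reproduced from the reported figures. A sketch, assuming cell counts back-calculated from 89% of 47 unregistered and 64% of 439 registered trials (so small rounding differences from the published RR of 1.38 are expected) and a standard log-normal confidence interval:

```python
# Sketch: recomputing RR of favorable findings, unregistered vs.
# registered trials (reported: RR = 1.38, 95% CI 1.20-1.58). The
# counts 42/47 and 281/439 are approximations derived from the
# percentages in the abstract, not the authors' raw data.
import math

def relative_risk(a, n1, c, n2, z=1.96):
    """RR of exposed (a/n1) vs. unexposed (c/n2) with a log-normal CI."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)  # SE of ln(RR), Katz method
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

rr, lo, hi = relative_risk(42, 47, 281, 439)
print(f"RR = {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # ~1.40 (1.24-1.58)
```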
Project description: OBJECTIVE: Anonymised patient-level data from clinical research are increasingly recognised as a fundamental and valuable resource. Such data have value beyond the original research project: they can help drive scientific research and innovation and improve patient care. To support responsible data sharing, we need to develop systems that work for all stakeholders. The members of the Independent Review Panel (IRP) for the data sharing platform Clinical Study Data Request (CSDR) describe here some summary metrics from the platform and challenge the research community on why the promised demand for data has not been observed. SUMMARY OF DATA: From 2014 to the end of January 2019, a total of 473 research proposals (RPs) were submitted to CSDR. Of these, 364 met initial administrative and data availability checks, and the IRP approved 291. Of the 90 research teams that had completed their analyses by January 2018, 41 reported at least one resulting publication to CSDR. Fewer than half of the studies ever listed on CSDR have been requested. CONCLUSION: While acknowledging that there are areas for improvement in speed of access and promotion of the platform, the total number of applications for access and the resulting publications have been low, which challenges the sustainability of this model. What are the barriers for data contributors and secondary-analysis researchers? If this model does not work for all, what needs to change? One thing is clear: data access can realise new and unforeseen contributions to knowledge and improve patient health, but this will not be achieved unless we build sustainable models together that work for all.
Project description: Objective: To estimate financial payments from industry to US journal editors. Design: Retrospective observational study. Setting: 52 influential (high impact factor for their specialty) US medical journals from 26 specialties and the US Open Payments database, 2014. Participants: 713 editors at the associate level and above, identified from each journal's online masthead. Main outcome measures: All general payments (eg, personal income) and research-related payments from pharmaceutical and medical device manufacturers to eligible physicians in 2014. Percentages of editors receiving payments and the magnitude of such payments were compared across journals and by specialty. Journal websites were also reviewed to determine whether conflict of interest policies for editors were readily accessible. Results: Of 713 eligible editors, 361 (50.6%) received some (>$0) general payments in 2014, and 139 (19.5%) received research payments. The median general payment was $11 (£8; €9) (interquartile range $0-2923) and the median research payment was $0 ($0-0). The mean general payment was $28,136 (SD $415,045), and the mean research payment was $37,963 (SD $175,239). The highest median general payments were received by journal editors from endocrinology ($7207, $0-85,816), cardiology ($2664, $0-12,912), gastroenterology ($696, $0-20,002), rheumatology ($515, $0-14,280), and urology ($480, $90-669). For high impact general medicine journals, median payments were $0 ($0-14). A review of the 52 journal websites revealed that editor conflict of interest policies were readily accessible (ie, within five minutes) for 17 of 52 (32.7%) journals. Conclusions: Industry payments to journal editors are common and often large, particularly in certain subspecialties. Journals should consider the potential impact of such payments on public trust in published research.
Project description: Background: The work of journal editors is essential to producing high-quality literature, and editing can be a very rewarding career; however, the profession may not be immune to the gender pay gaps found in many professions and industries, including academia and clinical medicine. Our study aimed to quantify remuneration for journal editors of core clinical journals, determine whether a gender pay gap exists, and assess whether remuneration differs across publishing models and journal characteristics. Methods: We conducted an online survey of journal editors with substantial editing roles, including section editors and editors-in-chief, identified from the Abridged Index Medicus "Core Clinical" journals in MEDLINE. We analyzed information on demographics, editing income, and journal characteristics using a multivariable partial proportional odds model for ordinal logistic regression. Results: There were 166 survey respondents (response rate 9%), representing editors from 69 of 111 journals (62%). A total of 140 fully completed surveys were analyzed (95 males and 45 females); 50 editors (36%) did not receive remuneration for editorial work. No gender pay gap and no difference in remuneration between editors at subscription-based journals and those at open access journals were detected. Editors who were not primarily health care providers were more likely to have higher editing incomes (adjusted odds ratio [OR] 2.96, 95% confidence interval [CI] 1.18-7.46). Editors who spent more than 10 hours per week editing earned more than those who spent 10 hours or less (adjusted OR 16.7, 95% CI 7.02-39.76). Conclusions: We were unable to detect a gender pay gap or a difference in remuneration between editors at subscription-based journals and those at open access journals. More than one third of the editors surveyed from core clinical journals were not remunerated for their editing work.
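For readers unfamiliar with the model class behind these adjusted odds ratios, the sketch below fits an ordinary proportional-odds ordinal regression on simulated data. Note the caveats: the study used a partial proportional odds variant, which statsmodels' OrderedModel does not implement, and every variable name and value here is hypothetical:

```python
# Sketch: proportional-odds ordinal logistic regression on simulated
# data, as a stand-in for the partial proportional odds model named
# in the abstract. All data below are random, not the survey data.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 140  # matches the analyzed sample size; values are simulated

exog = pd.DataFrame({
    "hours_gt_10": rng.integers(0, 2, n),    # edits >10 h/week
    "non_clinician": rng.integers(0, 2, n),  # not primarily a care provider
    "male": rng.integers(0, 2, n),
})
# ordinal outcome: editing-income band (0 = none ... 3 = highest)
income_band = pd.Series(
    pd.Categorical(rng.integers(0, 4, n), categories=[0, 1, 2, 3], ordered=True))

model = OrderedModel(income_band, exog, distr="logit")
res = model.fit(method="bfgs", disp=False)
print(np.exp(res.params[:3]))  # adjusted ORs for the three predictors
```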
Project description:Peer review practices differ substantially between journals and disciplines. This study presents the results of a survey of 322 editors of journals in ecology, economics, medicine, physics and psychology. We found that 49% of the journals surveyed checked all manuscripts for plagiarism, that 61% allowed authors to recommend both for and against specific reviewers, and that less than 6% used a form of open peer review. Most journals did not have an official policy on altering reports from reviewers, but 91% of editors identified at least one situation in which it was appropriate for an editor to alter a report. Editors were also asked for their views on five issues related to publication ethics. A majority expressed support for co-reviewing, reviewers requesting access to data, reviewers recommending citations to their work, editors publishing in their own journals, and replication studies. Our results provide a window into what is largely an opaque aspect of the scientific process. We hope the findings will inform the debate about the role and transparency of peer review in scholarly publishing.
Project description: Data sharing is crucial to the advancement of science because it facilitates collaboration, transparency, reproducibility, criticism, and re-analysis. Publishers are well positioned to promote the sharing of research data by implementing data sharing policies. While there is an increasing trend toward requiring data sharing, not all journals mandate that data be shared at the time of publication. In this study, we extended previous work to analyze the data sharing policies of 447 journals across several scientific disciplines, including biology, clinical sciences, mathematics, physics, and the social sciences. Our results showed that only a small percentage of journals require data sharing as a condition of publication, and that this varies across disciplines and Impact Factors. Both Impact Factor and discipline are associated with the presence of a data sharing policy. Our results suggest that journals with higher Impact Factors are more likely to have data sharing policies; use shared data in peer review; require deposit of specific data types into publicly available data banks; and refer to reproducibility as a rationale for sharing data. Biological science journals are more likely than social science and mathematics journals to require data sharing.
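The abstract does not name the test behind the reported associations; one reasonable way to examine them would be a logistic regression of policy presence on Impact Factor and discipline, as in the hypothetical sketch below (random data, illustrative only):

```python
# Sketch: testing whether Impact Factor and discipline are associated
# with having a data sharing policy. The regression choice is an
# assumption; the data frame below is simulated, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 447  # number of journals analyzed in the study

df = pd.DataFrame({
    "has_policy": rng.integers(0, 2, n),        # 1 = policy present
    "impact_factor": rng.gamma(2.0, 2.0, n),    # skewed, like real IFs
    "discipline": rng.choice(
        ["biology", "clinical", "math", "physics", "social"], n),
})

fit = smf.logit("has_policy ~ impact_factor + C(discipline)", data=df).fit(disp=False)
print(fit.summary())  # coefficients on the log-odds scale
```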