Project description: Importance: Enrolling racially and ethnically diverse pediatric research participants is critical to ensuring equitable access to health advances and generalizability of research findings. Objectives: To examine the reporting of race and ethnicity for National Institutes of Health (NIH)-funded pediatric clinical trials and to assess the representation of pediatric participants from different racial and ethnic groups compared with distributions in the US population. Design, setting, and participants: This cross-sectional study included NIH-funded pediatric (ages 0-17 years) trials with grant funding completed between January 1, 2017, and December 31, 2019, and trial results reported as of June 30, 2022. Exposures: National Institutes of Health policies and guidance statements on the reporting of race and ethnicity of participants in NIH-funded clinical trials. Main outcomes and measures: The main outcome was reporting of participant race and ethnicity for NIH-funded pediatric clinical trials in publications and ClinicalTrials.gov. Results: There were 363 NIH-funded pediatric trials included in the analysis. Reporting of race and ethnicity data was similar in publications and ClinicalTrials.gov, with 90.3% (167 of 185) of publications and 93.9% (77 of 82) of ClinicalTrials.gov reports providing data on race and/or ethnicity. Among the 160 publications reporting race, there were 43 different race classifications, with only 3 publications (1.9%) using the NIH-required categories. By contrast, in ClinicalTrials.gov, 61 reports (79.2%) provided participant race and ethnicity using the NIH-specified categories (P < .001). There was racially and ethnically diverse enrollment of pediatric participants, with overrepresentation of racial and ethnic minority groups compared with the US population. Conclusions and relevance: This cross-sectional study of NIH-funded pediatric clinical trials found high rates of reporting of participant race and ethnicity, with diverse representation of trial participants. These findings suggest that the NIH is meeting its directive of ensuring diverse participant enrollment in the research it supports.
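A minimal sketch, in Python, of the kind of representation comparison described above: it contrasts each group's share of trial enrollment with its share of the US population. The group labels, counts, and population shares are illustrative placeholders, not data from the study.

```python
# Hypothetical sketch comparing trial enrollment shares with US population shares.
# All group labels, counts, and population shares are illustrative placeholders,
# not data from the study.
enrolled = {
    "Black or African American": 120,
    "Hispanic or Latino": 150,
    "White": 300,
    "Asian": 30,
}
us_population_share = {
    "Black or African American": 0.14,
    "Hispanic or Latino": 0.19,
    "White": 0.59,
    "Asian": 0.06,
}

total = sum(enrolled.values())
for group, count in enrolled.items():
    trial_share = count / total
    ratio = trial_share / us_population_share[group]
    status = "overrepresented" if ratio > 1 else "underrepresented"
    print(f"{group}: trial {trial_share:.1%} vs population {us_population_share[group]:.1%} "
          f"(ratio {ratio:.2f}, {status})")
```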
Project description: Policy has tremendous potential to improve population health when informed by research evidence. Such evidence, however, typically plays a suboptimal role in policymaking processes. The field of policy dissemination and implementation research (policy D&I) exists to address this challenge. The purpose of this study was to: (1) determine the extent to which policy D&I was funded by the National Institutes of Health (NIH), (2) identify trends in NIH-funded policy D&I, and (3) describe characteristics of NIH-funded policy D&I projects. The NIH Research Portfolio Online Reporting Tool was used to identify all projects funded through D&I-focused funding announcements. We screened for policy D&I projects by searching project title, abstract, and term fields for mentions of "policy," "policies," "law," "legal," "legislation," "ordinance," "statute," "regulation," "regulatory," "code," or "rule." A project was classified as policy D&I if it explicitly proposed to conduct research about the content of a policy, the process through which it was developed, or the outcomes it produced. A coding guide was iteratively developed, and all projects were independently coded by two researchers. ClinicalTrials.gov and PubMed were used to obtain additional project information and validate coding decisions. Descriptive statistics, stratified by funding mechanism, Institute, and project characteristics, were produced. Between 2007 and 2014, 146 projects were funded through the D&I funding announcements, 12 (8.2%) of which were policy D&I. Policy D&I funding totaled $16,177,250, equivalent to 10.5% of all funding through the D&I funding announcements. The proportion of funding for policy D&I projects ranged from 14.6% in 2007 to 8.0% in 2012. Policy D&I projects were primarily focused on policy outcomes (66.7%), implementation (41.7%), state-level policies (41.7%), and policies within the USA (83.3%). Tobacco (33.3%) and cancer (25.0%) control were the primary topics of focus. Many projects combined survey (58.3%) and interview (33.3%) methods with analysis of archival data sources. NIH has made an initial investment in policy D&I research, but the level of support has varied between Institutes. Policy D&I researchers have utilized a variety of designs, methods, and data sources to investigate the development processes, content, and outcomes of public and private policies.
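A minimal sketch of the keyword screen described above, assuming each project record is a simple dictionary with title, abstract, and terms fields. The field names and example records are hypothetical, not the actual RePORTER export format, and flagged projects would still go to the two-coder review described in the study.

```python
import re

# Keywords used to flag potential policy D&I projects (from the study's search terms).
KEYWORDS = ["policy", "policies", "law", "legal", "legislation",
            "ordinance", "statute", "regulation", "regulatory", "code", "rule"]
PATTERN = re.compile(r"\b(" + "|".join(KEYWORDS) + r")\b", re.IGNORECASE)

def flag_policy_candidate(project: dict) -> bool:
    """Return True if any keyword appears in the title, abstract, or terms fields."""
    text = " ".join(project.get(field, "") for field in ("title", "abstract", "terms"))
    return bool(PATTERN.search(text))

# Illustrative records, not real RePORTER data.
projects = [
    {"title": "Dissemination of smoke-free housing policies", "abstract": "...", "terms": "tobacco"},
    {"title": "Implementation of a clinic-based exercise program", "abstract": "...", "terms": ""},
]
candidates = [p for p in projects if flag_policy_candidate(p)]
print(len(candidates), "candidate(s) for full policy D&I coding")
```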
Project description: Purpose: To measure diversity within the National Institutes of Health (NIH)-funded workforce. The authors use a relevant labor market perspective to more directly understand what the NIH can influence in terms of enhancing diversity through NIH policies. Method: Using the relevant labor market (defined as persons with advanced degrees working as biomedical scientists in the United States) as the conceptual framework, and informed by accepted economic principles, the authors used the American Community Survey and NIH administrative data to calculate representation ratios for the NIH-funded biomedical workforce from 2008 to 2012 by race, ethnicity, sex, and citizenship status, and compared these against the pool of individuals with those characteristics in the potential labor market. Results: In general, the U.S. population during this time period was an inaccurate comparison group for measuring diversity of the NIH-funded scientific workforce. Measured against the relevant labor market, the representation of women and traditionally underrepresented groups in NIH-supported postdoctoral fellowships, traineeships, and mentored career development programs was greater than their representation in that market, whereas the same analysis found these groups to be less represented in the NIH-funded independent investigator pool. Conclusions: Although these findings provide a picture of the current NIH-funded workforce and a foundation for understanding the federal role in developing, maintaining, and renewing diverse scientific human resources, further study is needed to identify whether junior- and early-stage investigators who are part of more diverse cohorts will naturally transition into independent NIH-funded investigators, or whether they will leave the workforce before achieving independent researcher status.
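A minimal sketch of the representation-ratio comparison described above: each group's share of the NIH-funded pool divided by its share of the relevant labor market, with a ratio above 1 indicating greater representation than in that market. The group labels and counts below are hypothetical.

```python
# Hypothetical counts; not the study's data.
nih_funded = {"Group A": 1200, "Group B": 300, "Group C": 150}
relevant_labor_market = {"Group A": 90000, "Group B": 30000, "Group C": 9000}

def shares(counts: dict) -> dict:
    """Convert raw counts into each group's share of the total."""
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

nih_share = shares(nih_funded)
market_share = shares(relevant_labor_market)

for group in nih_funded:
    ratio = nih_share[group] / market_share[group]
    print(f"{group}: representation ratio {ratio:.2f} "
          f"({'over' if ratio > 1 else 'under'}-represented relative to the labor market)")
```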
Project description: Advances in cancer treatments have led to nearly 17 million survivors in the US today. Cardiovascular complications attributed to cancer treatments are the leading cause of morbidity and mortality in cancer survivors. In response, NCI and NHLBI held 2 workshops and issued funding opportunities to strengthen research on cardiotoxicity. A representative portfolio of NIH grants categorizing basic, interventional, and observational projects is presented. Compared with anthracyclines, research on radiation therapy and newer treatments is underrepresented. Multidisciplinary collaborative research that considers the cardiotoxicity stage and optimizes the balance between cardiovascular risk and cancer-treatment benefit might support continued improvements in cancer outcomes.
Project description: Background: The reporting of outcomes within published randomized trials has previously been shown to be incomplete, biased and inconsistent with study protocols. We sought to determine whether outcome reporting bias would be present in a cohort of government-funded trials subjected to rigorous peer review. Methods: We compared protocols for randomized trials approved for funding by the Canadian Institutes of Health Research (formerly the Medical Research Council of Canada) from 1990 to 1998 with subsequent reports of the trials identified in journal publications. Characteristics of reported and unreported outcomes were recorded from the protocols and publications. Incompletely reported outcomes were defined as those with insufficient data provided in publications for inclusion in meta-analyses. An overall odds ratio measuring the association between completeness of reporting and statistical significance was calculated, stratified by trial. Finally, primary outcomes specified in trial protocols were compared with those reported in publications. Results: We identified 48 trials with 68 publications and 1402 outcomes. The median number of participants per trial was 299, and 44% of the trials were published in general medical journals. A median of 31% (10th-90th percentile range 5%-67%) of outcomes measured to assess the efficacy of an intervention (efficacy outcomes) and 59% (0%-100%) of those measured to assess the harm of an intervention (harm outcomes) per trial were incompletely reported. Statistically significant efficacy outcomes had higher odds than nonsignificant efficacy outcomes of being fully reported (odds ratio 2.7; 95% confidence interval 1.5-5.0). Primary outcomes differed between protocols and publications for 40% of the trials. Interpretation: Selective reporting of outcomes frequently occurs in publications of high-quality government-funded trials.
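A minimal sketch of a trial-stratified odds ratio of the kind reported above, using the Mantel-Haenszel estimator over one 2x2 table per trial (fully vs incompletely reported by statistically significant vs nonsignificant outcomes). The counts are invented for illustration and are not the study's data.

```python
# Mantel-Haenszel odds ratio across strata (one 2x2 table per trial).
# a = significant & fully reported, b = significant & incompletely reported,
# c = nonsignificant & fully reported, d = nonsignificant & incompletely reported.
# All counts are illustrative placeholders, not the study's data.
strata = [
    (8, 4, 5, 10),   # trial 1
    (6, 5, 3, 9),    # trial 2
    (10, 6, 7, 12),  # trial 3
]

numerator = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
denominator = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
print(f"Mantel-Haenszel odds ratio (full reporting vs significance): {numerator / denominator:.2f}")
```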
Project description: Importance: Despite the rapid growth of interest and diversity in applications of artificial intelligence (AI) to biomedical research, there are limited objective ways to characterize the potential for use of AI in clinical practice. Objective: To examine which types of medical AI have the greatest estimated potential for translational impact (ie, the ability to lead to development that has measurable value for human health). Design, setting, and participants: In this cohort study, research grants related to AI awarded between January 1, 1985, and December 31, 2020, were identified from a National Institutes of Health (NIH) award database. The text content for each award was entered into a natural language processing (NLP) clustering algorithm. An NIH database was also used to extract citation data, including the number of citations and the approximate potential to translate (APT) score for published articles associated with the granted awards, to create proxies for translatability. Exposures: Unsupervised assignment of AI-related research awards to application topics using NLP. Main outcomes and measures: Annualized citations per $1 million funding (ACOF) and average APT score for award-associated articles, grouped by application topic. The APT score is a machine learning-based metric created by the NIH Office of Portfolio Analysis that quantifies the likelihood of future citation by a clinical article. Results: A total of 16,629 NIH awards related to AI were included in the analysis, and 75 applications of AI were identified. Total annual funding for AI grew from $17.4 million in 1985 to $1.43 billion in 2020. By average APT, interpersonal communication technologies (0.488; 95% CI, 0.472-0.504) and population genetics (0.463; 95% CI, 0.453-0.472) had the highest translatability; environmental health (ACOF, 1038) and applications focused on the electronic health record (ACOF, 489) also had high translatability. The category of applications related to biochemical analysis was found to have low translatability by both metrics (average APT, 0.393; 95% CI, 0.388-0.398; ACOF, 246). Conclusions and relevance: These findings suggest that data on grants from the NIH can be used to identify and characterize medical applications of AI and to understand changes in academic productivity, funding support, and potential for translational impact. This method may be extended to characterize other research domains.
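A minimal sketch of one plausible way to compute the ACOF metric named above (annualized citations per $1 million of funding), grouped by application topic. The field names, grouping, and numbers are assumptions for illustration, not the NIH Office of Portfolio Analysis implementation.

```python
# Illustrative awards: application topic, total funding (USD), total citations of
# associated articles, and the number of years over which those citations accrued.
# All values are placeholders, not data from the study.
awards = [
    {"topic": "environmental health", "funding": 2_500_000, "citations": 520, "years": 5},
    {"topic": "environmental health", "funding": 1_000_000, "citations": 310, "years": 4},
    {"topic": "biochemical analysis", "funding": 4_000_000, "citations": 180, "years": 6},
]

def acof(award):
    """Annualized citations per $1 million funding (one plausible definition)."""
    citations_per_year = award["citations"] / award["years"]
    funding_in_millions = award["funding"] / 1_000_000
    return citations_per_year / funding_in_millions

by_topic = {}
for award in awards:
    by_topic.setdefault(award["topic"], []).append(acof(award))

for topic, values in by_topic.items():
    print(f"{topic}: mean ACOF {sum(values) / len(values):.0f}")
```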
Project description: The clinical trials community is continually searching for dependable and reliable ways to improve clinical research. This search has led to considerable interest in adaptive clinical trial designs, which provide the flexibility to adjust trial characteristics on the basis of data reviewed at interim stages. Statisticians and clinical investigators have proposed or implemented a wide variety of adaptations in clinical trials, but specific approaches have met with differing levels of support. Within industry, investigators are actively exploring the benefits and pitfalls associated with adaptive designs (ADs). For example, a Drug Information Association (DIA) working group on ADs has engaged regulatory agencies in discussions. Many researchers working on publicly funded clinical trials, however, are not yet fully engaged in this discussion. We organized the Scientific Advances in Adaptive Clinical Trial Designs Workshop to begin a conversation about using ADs in publicly funded research. Held in November 2009, the 1½-day workshop brought together representatives from the National Institutes of Health (NIH), the Food and Drug Administration (FDA), the European Medicines Agency (EMA), the pharmaceutical industry, nonprofit foundations, the patient advocacy community, and academia. The workshop offered a forum for participants to address issues of ADs that arise at the planning, designing, and execution stages of clinical trials, and to hear the perspectives of influential members of the clinical trials community. The participants also set forth recommendations for guiding action to promote the appropriate use of ADs. These recommendations have since been presented, discussed, and vetted in a number of venues, including the University of Pennsylvania Conference on Statistical Issues in Clinical Trials and the Society for Clinical Trials annual meeting. This article provides a brief overview of ADs, describes the rationale behind conducting the workshop, and summarizes the main recommendations produced as a result of the workshop. There is a growing interest in the use of adaptive clinical trial designs. However, a number of logistical barriers need to be addressed in order to obtain the potential advantages of an AD. Currently, the pharmaceutical industry is well ahead of academic trialists with respect to addressing these barriers. Academic trialists will need to address important issues such as education, infrastructure, modifications to existing funding models, and the impact on Data and Safety Monitoring Boards (DSMBs) in order to achieve the possible benefits of adaptive clinical trial designs.
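To make the idea of adjusting a trial on the basis of interim data concrete, here is a toy simulation of a two-stage design with a simple futility stop. The sample sizes, effect size, and stopping rule are arbitrary illustrations and do not come from the workshop or its recommendations.

```python
import random
from statistics import mean

# Toy two-stage adaptive design: enroll half the planned patients per arm, examine
# the interim difference in means, stop early for futility if it is not positive,
# otherwise enroll the remaining patients. Sample sizes, effect size, and the
# stopping rule are arbitrary illustrations, not recommendations from the workshop.
random.seed(0)

def simulate_arm(n, effect):
    """Simulate n normally distributed responses with the given mean effect."""
    return [random.gauss(effect, 1.0) for _ in range(n)]

n_per_stage = 50
control = simulate_arm(n_per_stage, 0.0)
treatment = simulate_arm(n_per_stage, 0.3)

interim_diff = mean(treatment) - mean(control)
if interim_diff <= 0:
    print(f"Interim difference {interim_diff:.2f}: stop early for futility")
else:
    control += simulate_arm(n_per_stage, 0.0)
    treatment += simulate_arm(n_per_stage, 0.3)
    final_diff = mean(treatment) - mean(control)
    print(f"Continued to full enrollment; final difference {final_diff:.2f}")
```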