Project description: Background: Various stakeholders are calling for increased availability of data and code from cancer research. However, it is unclear how commonly these products are shared, and what factors are associated with sharing. Our objective was to evaluate how frequently oncology researchers make data and code available and to explore factors associated with sharing. Methods: We performed a cross-sectional analysis of a random sample of 306 cancer-related articles indexed in PubMed in 2019 that studied research subjects with a cancer diagnosis. All articles were independently screened for eligibility by two authors. Outcomes of interest included the prevalence of affirmative sharing declarations and the rate at which declarations connected to data complying with key FAIR principles (e.g. posted to a recognised repository, assigned an identifier, data license outlined, non-proprietary formatting). We also investigated associations between sharing rates and several journal characteristics (e.g. sharing policies, publication models), study characteristics (e.g. cancer rarity, study design), open science practices (e.g. pre-registration, pre-printing), and subsequent citation rates between 2020 and 2021. Results: One in five studies declared that data were publicly available (59/306, 19%, 95% CI: 15-24%). However, when data availability was investigated, this percentage dropped to 16% (49/306, 95% CI: 12-20%), and then to less than 1% (1/306, 95% CI: 0-2%) when data were checked for compliance with key FAIR principles. While only 4% of articles that used inferential statistics reported code to be available (10/274, 95% CI: 2-6%), the odds of reporting code to be available were 5.6 times higher for researchers who also shared data. Compliance with mandatory data and code sharing policies was observed in 48% (14/29) and 0% (0/6) of articles, respectively. However, 88% of articles (45/51) included data availability statements when required to do so. Policies that encouraged data sharing appeared no more effective than having no policy at all. The only factors associated with higher rates of data sharing were studying rare cancers and using publicly available data to complement original research. Conclusions: Data and code sharing in oncology occurs infrequently, and at a lower rate than would be expected given the prevalence of mandatory sharing policies. There is also a large gap between those declaring data to be available and those archiving data in a way that facilitates its reuse. We encourage journals to actively check compliance with their sharing policies, and researchers to consult community-accepted guidelines when archiving the products of their research.
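The headline estimates above are binomial proportions with 95% confidence intervals. As a minimal sketch, the snippet below reproduces the reported bounds for the 59/306 declaration rate; the Wilson score interval is an assumption, since the abstract does not name the interval method used, but it matches the reported 15-24%.

```python
# Minimal sketch reproducing the 95% CI for the 59/306 declaration rate.
# The Wilson score interval is an assumption (the abstract does not name
# the method), but it matches the reported 15-24% bounds.
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom, (centre + margin) / denom

lo, hi = wilson_ci(59, 306)
print(f"59/306 = {59/306:.0%}, 95% CI: {lo:.0%}-{hi:.0%}")  # 19%, 15%-24%
```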
Project description: This study informs efforts to improve the discoverability of and access to biomedical datasets by providing a preliminary estimate of the number and type of datasets generated annually by research funded by the U.S. National Institutes of Health (NIH). It focuses on those datasets that are "invisible," that is, not deposited in a known repository. We analyzed NIH-funded journal articles that were published in 2011, cited in PubMed, and deposited in PubMed Central (PMC) to identify those that indicate data were submitted to a known repository. After excluding those articles, we analyzed a random sample of the remaining articles to estimate how many and what types of invisible datasets were used in each article. About 12% of the articles explicitly mention deposition of datasets in recognized repositories, leaving 88% with invisible datasets. Among articles with invisible datasets, we found an average of 2.9 to 3.4 datasets per article, suggesting there were approximately 200,000 to 235,000 invisible datasets generated from NIH-funded research published in 2011. Approximately 87% of the invisible datasets consist of data newly collected for the research reported; 13% reflect reuse of existing data. More than 50% of the datasets were derived from live human or non-human animal subjects. In addition to providing a rough estimate of the total number of datasets produced per year by NIH-funded researchers, this study identifies additional issues that must be addressed to improve the discoverability of and access to biomedical research data: the definition of a "dataset," determination of which (if any) data are valuable for archiving and preservation, and better methods for estimating the number of datasets of interest. Lack of consensus amongst annotators about the number of datasets in a given article reinforces the need for a principled way of thinking about how to identify and characterize biomedical datasets.
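The jump from per-article averages to the 200,000-235,000 annual total is a straightforward extrapolation. The sketch below illustrates it; the article count (~69,000) is back-derived from the reported figures rather than stated in the abstract, so treat it as a hypothetical input.

```python
# Sketch of the scale-up implied by the abstract: mean datasets per
# article times the number of articles with invisible datasets. The
# article count is back-derived (~200,000 / 2.9), not stated in the
# abstract, so it is a hypothetical input.
articles_with_invisible_data = 69_000  # assumed, see lead-in

for per_article in (2.9, 3.4):
    total = articles_with_invisible_data * per_article
    print(f"{per_article} datasets/article -> ~{total:,.0f} datasets")
# 2.9 -> ~200,100 ; 3.4 -> ~234,600 (the reported 200,000-235,000 range)
```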
Project description: The National Institutes of Health (NIH) has long supported using nonhuman primate (NHP) models for research on kidney, pancreatic islet, heart, and lung transplantation. The primary purpose of this research has been to develop new treatments for down-modulating or preventing deleterious immune responses after transplantation in human patients. Here, we discuss NIH-funded NHP studies of immune cell depletion, costimulation blockade, regulatory cell therapy, desensitization, and mixed hematopoietic chimerism that either preceded clinical trials or prevented the human application of therapies that were toxic or ineffective.
Project description: Background: Timely accrual of a representative sample is a key factor in whether Alzheimer's disease (AD) clinical trials successfully answer the scientific questions under study. Studies in other fields have observed that, over time, recruitment to trials has become increasingly reliant on larger numbers of sites, with declines in the average per-site recruitment rate. Here, we examined trends in recruitment over a 20-year period of NIH-funded AD clinical trials conducted by the Alzheimer's Disease Cooperative Study (ADCS), a temporally consistent network of sites devoted to interventional research. Methods: We performed retrospective analyses of eleven ADCS randomized clinical trials. To examine recruitment planning, we calculated the expected number of participants to be enrolled per site for each trial. To examine actual recruitment, we quantified the number of participants enrolled per site per month. Results: No effects of time were observed on recruitment planning or overall recruitment rates across trials. No trial achieved an overall recruitment rate greater than one participant per site per month. We observed the fastest recruitment rates in trials with no competition and the slowest in trials that overlapped in time. The highest recruitment rates were consistently seen early within trials and declined over the course of studies. Conclusions: Trial recruitment projections should plan for fewer than one participant randomized per site per month and should consider the number of other AD trials being conducted concurrently.
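The recruitment-rate metric in this abstract is participants randomized per site per month. A minimal sketch follows, with invented trial figures, since the abstract reports no per-trial numbers beyond the one-per-site-per-month ceiling.

```python
# Minimal sketch of the recruitment-rate metric: participants
# randomized per site per month. The trial figures are invented; the
# abstract reports only that no trial exceeded 1/site/month.
def recruitment_rate(enrolled: int, n_sites: int, months: float) -> float:
    """Participants randomized per site per month."""
    return enrolled / (n_sites * months)

# Hypothetical trial: 400 participants, 60 sites, 18 months of accrual.
print(f"{recruitment_rate(400, 60, 18):.2f} participants/site/month")  # 0.37
```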
Project description: Introduction: To fulfill its mission, the NIH Office of Disease Prevention systematically monitors NIH investments in applied prevention research. Specifically, the Office focuses on research in humans involving primary and secondary prevention, and prevention-related methods. Currently, the NIH uses the Research, Condition, and Disease Categorization system to report agency funding in prevention research. However, this system defines prevention research broadly to include primary and secondary prevention, studies on prevention methods, and basic and preclinical studies for prevention. A new methodology was needed to quantify NIH funding in applied prevention research specifically. Methods: A novel machine learning approach was developed and evaluated for its ability to characterize NIH-funded applied prevention research during fiscal years 2012-2015. The sensitivity, specificity, positive predictive value, accuracy, and F1 score of the machine learning method; the Research, Condition, and Disease Categorization system; and a combined approach were estimated. Analyses were completed during June-August 2017. Results: Because the machine learning method was trained to recognize applied prevention research, it identified applied prevention grants more accurately (F1 = 72.7%) than the Research, Condition, and Disease Categorization system (F1 = 54.4%) and a combined approach (F1 = 63.5%; both comparisons p<0.001). Conclusions: This analysis demonstrated the use of machine learning as an efficient method for classifying NIH-funded research grants in disease prevention.
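The comparison above hinges on the F1 score, the harmonic mean of precision and recall. A short sketch of the computation follows; the confusion-matrix counts are invented to reproduce the 72.7% figure and are not from the study.

```python
# Sketch of the F1 score used to compare the classifiers. The counts
# below are invented to reproduce the 72.7% figure; they are not from
# the study.
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f"F1 = {f1_score(tp=160, fp=60, fn=60):.1%}")  # 72.7%
```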
Project description: Background: The widespread adoption of smartphones provides researchers with expanded opportunities for developing, testing, and implementing interventions. The National Institutes of Health (NIH) funds competitive, investigator-initiated grant applications. Funded grants represent the state of the science and are therefore expected to anticipate the progression of research in the near future. Objective: The objective of this paper is to provide an analysis of the kinds of smartphone-based intervention apps funded in NIH research grants during the five-year period between 2014 and 2018. Methods: We queried NIH RePORTER to identify candidate funded grants that addressed mHealth and the use of smartphones. From 1524 potential grants, we identified 397 that met the requirement of including an intervention app. Each grant's abstract was analyzed to understand the focus of the intervention. The year of funding, type of activity (e.g., R01, R34, and so on), and funding were noted. Results: We identified 13 categories of strategies employed in funded smartphone intervention apps. Most grants included either one (35.0%) or two (39.0%) intervention approaches. These included artificial intelligence (57 apps), bionic adaptation (33 apps), cognitive and behavioral therapies (68 apps), contingency management (24 apps), education and information (85 apps), enhanced motivation (50 apps), facilitating, reminding and referring (60 apps), gaming and gamification (52 apps), mindfulness training (18 apps), monitoring and feedback (192 apps), norm setting (7 apps), skills training (85 apps), and social support and social networking (59 apps). The most frequently observed grant types were Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) grants (40.8%) and Research Project Grants (R01s) (26.2%). The number of grants funded increased over the five-year period, from 60 in 2014 to 112 in 2018. Conclusions: Smartphone intervention apps are increasingly competitive for NIH funding. They reflect a wide diversity of approaches that have significant potential for use in applied settings.
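Because a grant can employ more than one intervention approach, the 13 category counts sum to more than the 397 grants. A small sketch of that multi-label tally follows; the per-grant assignments are invented for illustration, though the category labels come from the abstract.

```python
# Sketch of the multi-label tally: each grant may employ several
# intervention approaches, so category counts (e.g. 192 monitoring-
# and-feedback apps) sum to more than the 397 grants. The per-grant
# assignments below are invented for illustration.
from collections import Counter

coded_grants = [
    {"monitoring and feedback", "education and information"},
    {"cognitive and behavioral therapies"},
    {"gaming and gamification", "monitoring and feedback"},
]
category_counts = Counter(cat for grant in coded_grants for cat in grant)
two_approach = sum(len(g) == 2 for g in coded_grants)
print(category_counts.most_common())
print(f"{two_approach}/{len(coded_grants)} grants used two approaches")
```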
Project description: Federal investment in survivorship science has grown markedly since the National Cancer Institute's creation of the Office of Cancer Survivorship in 1996. To describe the nature of this research, provide a benchmark, and map new directions for the future, a portfolio analysis of National Institutes of Health-wide survivorship grants was undertaken for fiscal year 2016. Applying survivorship-relevant terms, a search was conducted using the National Institutes of Health Information for Management, Planning, Analysis and Coordination grants database. Grants identified were reviewed for inclusion and categorized by grant mechanism used, funding agency, and principal investigator characteristics. Trained pairs of coders classified each grant by focus and design (observational vs interventional), population studied, and outcomes examined. A total of 215 survivorship grants were identified; 7 were excluded for lack of fit and 2 for nonresearch focus. Forty-one (19.7%), representing training grants (n = 38) or conference grants (n = 3), were not coded. Of the remaining 165 grants, most (88.5%) were funded by the National Cancer Institute; used the large, investigator-initiated (R01) mechanism (66.7%); focused on adult survivors alone (84.2%), often breast cancer survivors (47.3%); were observational in nature (57.3%); and addressed a broad array of topics, including psychosocial and physiologic outcomes, health behaviors, patterns of care, and economic/employment outcomes. Grants were led by investigators from diverse backgrounds, 28.4% of whom were early in their careers. Present funding patterns, many stable since 2006, point to the need to expand research to include different cancer sites, more ethnoculturally diverse samples, and older (>65 years) as well as longer-term (>5 years) survivors, and to address the effects of newer therapies.
Project description: Women and racial/ethnic minority dementia caregivers have unique caregiving experiences and support needs. To ensure the identification of potentially important differences in outcomes within these groups, the amended National Institutes of Health (NIH) Policy on Inclusion of Women and Minorities mandates reporting by gender and race/ethnicity. The objective of this study was to determine the inclusion and reporting rates among NIH-funded dementia caregiver support interventions. A focused systematic literature review of studies published from 1994 to 2015 located 48 articles meeting inclusion criteria. The majority of studies included women and racial/ethnic minorities; however, 67% did not report results by gender or racial/ethnic group. Acknowledgment of underreporting was more common for race/ethnicity than for gender. Our findings suggest limited compliance with NIH guidelines, which may reflect a lack of awareness regarding potential gender disparities in caregiving roles. Ensuring compliance requires shared investment from researchers, editors, and reviewers so that group differences are systematically identified and reported.
Project description: Purpose: Implementation science offers methods to evaluate the translation of genomic medicine research into practice. The extent to which the National Institutes of Health (NIH) human genomics grant portfolio includes implementation science is unknown. This brief report's objective is to describe recently funded implementation science studies in genomic medicine in the NIH grant portfolio and to identify remaining gaps. Methods: We identified investigator-initiated NIH research grants on implementation science in genomic medicine (funding initiated 2012-2016). A codebook was adapted from the literature, three authors coded the grants, and descriptive statistics were calculated for each code. Results: Forty-two grants fit the inclusion criteria (~1.75% of investigator-initiated genomics grants). The majority of included grants proposed qualitative and/or quantitative methods with cross-sectional study designs, and described clinical settings and primarily white, non-Hispanic study populations. Most grants were in oncology and examined genetic testing for risk assessment. Finally, the grants lacked implementation science frameworks, and most examined uptake of genomic medicine and/or assessed patient-centeredness. Conclusion: We identified large gaps in implementation science studies in genomic medicine in the funded NIH portfolio over the past 5 years. To move the genomics field forward, investigator-initiated research grants should employ rigorous implementation science methods within diverse settings and populations.
Project description: Survival of junior scientists in academic biomedical research is difficult in today's highly competitive funding climate. National Institutes of Health (NIH) data on first-time R01 grantees indicate that dropout from an NIH-supported research career is most rapid 4 to 5 years after the first R01 award. The factors associated with a high risk of dropping out, and whether these factors affect all junior investigators equally, are unclear. We identified a cohort of 1,496 investigators who received their first R01-equivalent (R01-e) awards from the National Institute of Allergy and Infectious Diseases between 2003 and 2010, and studied all of their subsequent NIH grant applications through 2016. Ultimately, 57% of the cohort were successful in obtaining new R01-e funding, despite highly competitive conditions. Among the investigators who failed to compete successfully for new funding (43%), the average time to dropping out was 5 years. Investigators who successfully obtained new grants showed remarkable within-person consistency across multiple grant submission behaviors, including submitting more applications per year, more renewal applications, and more applications to multiple NIH Institutes. Funded investigators appeared to have two advantages over their unfunded peers at the outset: they received better scores on their first R01-e grants, and they demonstrated an early ability to write applications that would be scored rather than triaged. The cohort rapidly segregated into two very different groups on the basis of PI consistency in the quality and frequency of applications submitted after the first R01-e award. Lastly, we identified a number of specific demographic factors, institutional characteristics, and grant submission behaviors that were associated with successful outcomes, and assessed their predictive value and relative importance for the likelihood of obtaining additional NIH funding.
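The abstract does not name the model used to assess predictive value; one plausible reading, sketched below with placeholder data, is a regression of refunding status on submission-behavior features. Everything in the snippet beyond the cohort size is an illustrative assumption.

```python
# Sketch (assumption): one way to assess the predictive value of
# submission behaviors for refunding. The model choice and features
# are illustrative; the abstract names neither, and the data here are
# random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1496  # cohort size, from the abstract
X = np.column_stack([
    rng.poisson(2.0, n),    # hypothetical: applications submitted per year
    rng.poisson(0.5, n),    # hypothetical: renewal applications
    rng.integers(0, 2, n),  # hypothetical: first application scored vs triaged
])
y = rng.integers(0, 2, n)   # placeholder outcome: obtained new R01-e funding

model = LogisticRegression().fit(X, y)
print("log-odds coefficients:", model.coef_.round(2))
```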