Project description: Modeling and simulation in computational neuroscience is currently a research enterprise aimed at better understanding neural systems. It is not yet directly applicable to the problems of patients with brain disease. For modeling to be used in clinical applications, there must be not only considerable progress in the field but also a concerted effort to use best practices in order to demonstrate model credibility to regulatory bodies, to clinics and hospitals, to doctors, and to patients. In doing this for neuroscience, we can learn lessons from long-standing practices in other areas of simulation (aircraft, computer chips), from software engineering, and from other biomedical disciplines. In this manuscript, we introduce some basic concepts that will be important in the development of credible clinical neuroscience models: reproducibility and replicability; verification and validation; model configuration; and procedures and processes for credible mechanistic multiscale modeling. We also discuss how garnering strong community involvement can promote model credibility. Finally, in addition to direct usage with patients, we note the potential for simulation usage in Simulation-Based Medical Education, an area which to date has relied primarily on physical models (mannequins) and scenario-based simulations rather than on numerical simulations.
Project description: The ability to reproduce experiments is a defining principle of science. Reproducibility of clinical research has received relatively little scientific attention; however, it is important because it may inform clinical practice, research agendas, and the design of future studies. We used scoping review methods to examine reproducibility within a cohort of randomized trials in clinical critical care research published in the top general medical and critical care journals. To identify relevant clinical practices, we searched the New England Journal of Medicine, The Lancet, and JAMA for randomized trials published up to April 2016. To identify a comprehensive set of studies for these practices, included articles informed secondary searches within other high-impact medical and specialty journals. We included late-phase randomized controlled trials examining therapeutic clinical practices in adults admitted to general medical-surgical or specialty intensive care units (ICUs). Included articles were classified using a reproducibility framework: an original study was the first to evaluate a clinical practice, and a reproduction attempt re-evaluated that practice in a new set of participants. Overall, 158 practices were examined in 275 included articles. A reproduction attempt was identified for 66 practices (42%, 95% CI 33-50%). Original studies reported larger effects than reproduction attempts (primary endpoint, risk difference 16.0%, 95% CI 11.6-20.5% vs. 8.4%, 95% CI 6.0-10.8%, P = 0.003). More than half of the clinical practices with a reproduction attempt demonstrated effects that were inconsistent with the original study (56%, 95% CI 42-68%); a large fraction of these were reported to be efficacious in the original study but lacked efficacy in the reproduction attempt (34%, 95% CI 19-52%). Two practices reported to be efficacious in the original study were found to be harmful in the reproduction attempt. In sum, a minority of critical care practices with research published in high-profile journals were evaluated for reproducibility; less than half had reproducible effects.
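As a minimal sketch of the interval arithmetic behind the headline figure above (66 of 158 practices with a reproduction attempt), the proportion and its 95% CI can be recomputed in Python. The Wilson interval is an assumption here, since the abstract does not state which method the authors used.

```python
# Recompute the headline estimate: 66 of 158 practices had a reproduction attempt.
# The Wilson interval is an assumption; the abstract does not name the method used.
from statsmodels.stats.proportion import proportion_confint

count, nobs = 66, 158
low, high = proportion_confint(count, nobs, alpha=0.05, method="wilson")
print(f"proportion = {count / nobs:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
# -> roughly 41.8%, CI about [34%, 50%], consistent with the reported 42% (33-50%)
```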
Project description: Amidst a worldwide vaccination campaign, trust in science plays a significant role in addressing the COVID-19 pandemic. Given current concerns regarding research standards, we were interested in how Spanish scholars perceived COVID-19 research and the extent to which questionable research practices and potentially problematic academic incentives are commonplace. We asked researchers to evaluate the expected quality of their own COVID-19 projects and of other peers' research, and compared these assessments with those from scholars not involved in COVID-19 research. We investigated self-admitted and estimated rates of questionable research practices and attitudes towards the current research status. Responses from 131 researchers suggested that COVID-19 evaluations followed partisan lines, with scholars being more pessimistic about their colleagues' research than their own. Additionally, researchers not involved in COVID-19 projects were more negative than their participating peers. These differences were particularly notable for areas such as the expected theoretical foundations or overall quality of the research, among others. Most Spanish scholars expected questionable research practices and inadequate incentives to be widespread. In these two aspects, researchers tended to agree regardless of their involvement in COVID-19 research. We provide specific recommendations for improving future meta-science studies, such as redefining QRPs as inadequate research practices (IRPs). This change could help avoid key controversies regarding the definition of QRPs while highlighting their detrimental impact. Lastly, we join previous calls to improve transparency and academic career incentives as a cornerstone for generating trust in science. Supplementary information: The online version contains supplementary material available at 10.1007/s12144-022-02797-6.
Project description: Child sexual assault (CSA) cases reliant on uncorroborated testimony yield low conviction rates. Past research demonstrated a strong relationship between verdict and juror CSA knowledge (such as knowledge of typical delays in reporting by victims) and perceived victim credibility. This trial simulation experiment examined the effectiveness of interventions by an expert witness or an educative judicial direction in reducing jurors' CSA misconceptions. Participants were 885 jurors in New South Wales, Australia. After viewing a professionally acted video trial, half the jurors rendered individual verdicts and half deliberated in groups of 8-12 before completing a post-trial questionnaire. Multilevel structural equation modeling exploring the relationship between CSA knowledge and verdict demonstrated that greater post-intervention CSA knowledge by itself increased the odds of conviction, and that the judicial direction produced a higher level of post-trial CSA knowledge in jurors than the expert witness interventions did. Moreover, greater CSA knowledge was associated with heightened credibility perceptions of the complainant and a corroborating witness. At the conclusion of the trial, the more jurors knew about CSA, the higher the perceived credibility of both the complainant and her grandmother, and the more likely jurors were to convict the accused.
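The key quantity in this abstract is an odds ratio linking CSA knowledge to verdict. As a simplified, single-level analogue of the authors' multilevel structural equation model (which this sketch does not attempt to reproduce), a logistic regression of verdict on a knowledge score yields an odds ratio per unit of knowledge; the variable names, coefficients, and synthetic data below are all hypothetical.

```python
# Simplified single-level analogue of a knowledge -> verdict analysis.
# Hypothetical data and variable names; the study used multilevel SEM,
# which this sketch deliberately does not attempt to reproduce.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 885                                   # jurors, matching the sample size above
knowledge = rng.normal(0.0, 1.0, n)       # standardized post-trial CSA knowledge
logit_p = -0.5 + 0.8 * knowledge          # assumed true effect, illustration only
verdict = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # 1 = convict

X = sm.add_constant(knowledge)
fit = sm.Logit(verdict, X).fit(disp=0)
odds_ratio = np.exp(fit.params[1])        # odds of conviction per SD of knowledge
print(f"odds ratio per SD of CSA knowledge: {odds_ratio:.2f}")
```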
Project description: Genome editing tools have already revolutionized biomedical research and are also expected to have an important impact in the clinic. However, their extensive use in research has revealed much unpredictability, both on and off target, in the outcome of their application. We discuss the challenges associated with this unpredictability, both for research and in the clinic. For the former, extensive validation of the model is essential. For the latter, potential unpredicted activity does not preclude the use of these tools but requires that molecular evidence to underpin the relevant risk:benefit evaluation is available. Safe and successful clinical application will also depend on the mode of delivery and the cellular context.
Project description: Many researchers try to understand a biological condition by identifying biomarkers. This is typically done using univariate hypothesis testing over a labeled dataset, declaring a feature to be a biomarker if there is a statistically significant difference between its values for subjects with different outcomes. However, such sets of proposed biomarkers are often not reproducible: subsequent studies often fail to identify the same sets. Indeed, there is often only a very small overlap between the biomarkers proposed in pairs of related studies that explore the same phenotypes over the same distribution of subjects. This paper first defines the Reproducibility Score for a labeled dataset as a measure (taking values between 0 and 1) of the reproducibility of the results produced by a specified fixed biomarker discovery process for a given distribution of subjects. We then provide ways to reliably estimate this score by defining algorithms that produce an over-bound and an under-bound for it on a given dataset and biomarker discovery process, for the case of univariate hypothesis testing on dichotomous groups. We confirm that these approximations are meaningful by providing empirical results on a large number of datasets, showing that these predictions match known reproducibility results. To encourage others to apply this technique to analyze their biomarker sets, we have also created a publicly available website, https://biomarker.shinyapps.io/BiomarkerReprod/, that produces these Reproducibility Score approximations for any given dataset (with continuous or discrete features and binary class labels).
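A minimal sketch of the kind of univariate discovery pipeline this paper analyzes (one hypothesis test per feature on dichotomous groups, followed by a multiple-testing correction), together with a simple overlap measure between two studies' biomarker sets. This is not the paper's over-/under-bound estimators, and the data, effect sizes, and correction method below are assumptions for illustration only.

```python
# Sketch of the univariate biomarker-discovery process the paper analyzes:
# one t-test per feature, then a multiple-testing correction (here BH-FDR).
# Synthetic data; this is NOT the paper's over-/under-bound estimators.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def discover_biomarkers(X, y, alpha=0.05):
    """Return indices of features that differ between the two label groups."""
    pvals = np.array([ttest_ind(X[y == 0, j], X[y == 1, j]).pvalue
                      for j in range(X.shape[1])])
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return set(np.flatnonzero(reject))

rng = np.random.default_rng(1)
n, d, shifted = 100, 500, 20              # subjects, features, true biomarkers

def sample_study():
    y = rng.binomial(1, 0.5, n)           # dichotomous outcome labels
    X = rng.normal(0, 1, (n, d))
    X[y == 1, :shifted] += 0.6            # group difference in the first 20 features
    return X, y

set_a = discover_biomarkers(*sample_study())   # "original" study
set_b = discover_biomarkers(*sample_study())   # "reproduction" study
jaccard = len(set_a & set_b) / max(1, len(set_a | set_b))
print(f"overlap (Jaccard) between the two studies' biomarker sets: {jaccard:.2f}")
```

Running the same discovery process on two independent samples from the same distribution and comparing the selected sets is exactly the kind of instability the Reproducibility Score is designed to quantify.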
Project description: Clinical trials are the final links in the chain of knowledge for determining the role of therapeutic advances. Unfortunately, in an important sense they are the weakest links. This article describes two designs that are being explored today: platform trials and basket trials. Both attempt to merge clinical research and clinical practice.
Project description: BACKGROUND: Reproducibility is the hallmark of good science. Maintaining a high degree of transparency in scientific reporting is essential not just for gaining trust and credibility within the scientific community but also for facilitating the development of new ideas. Sharing data and computer code associated with publications is becoming increasingly common, motivated partly in response to data deposition requirements from journals and mandates from funders. Despite this increase in transparency, it is still difficult to reproduce or build upon the findings of most scientific publications without access to a more complete workflow. FINDINGS: Version control systems (VCS), which have long been used to maintain code repositories in the software industry, are now finding new applications in science. One such open source VCS, Git, provides a lightweight yet robust framework that is ideal for managing the full suite of research outputs such as datasets, statistical code, figures, lab notes, and manuscripts. For individual researchers, Git provides a powerful way to track and compare versions, retrace errors, and explore new approaches in a structured manner, all while maintaining a full audit trail. For larger collaborative efforts, Git and Git hosting services make it possible for everyone to work asynchronously and merge their contributions at any time, all while maintaining a complete authorship trail. In this paper I provide an overview of Git along with use cases that highlight how this tool can be leveraged to make science more reproducible and transparent, foster new collaborations, and support novel uses.
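As a minimal, hypothetical sketch of the workflow described above, the standard Git commands can be driven from a Python script so that each analysis run is snapshotted alongside its data, figures, and notes; the file paths are illustrative, and only ordinary git CLI subcommands are used.

```python
# Minimal sketch of the Git workflow described above, driven from Python.
# Uses only standard git CLI subcommands; the file paths are illustrative.
import subprocess

def git(*args):
    """Run a git command in the current directory, failing loudly on error."""
    subprocess.run(["git", *args], check=True)

git("init")                                   # safe to re-run in an existing repo
# Track the full suite of research outputs, not just the manuscript.
git("add", "analysis.py", "data/measurements.csv", "figures/", "notes.md")
git("commit", "-m", "Snapshot analysis, data, figures, and lab notes")
# (commit assumes something changed since the last snapshot)
git("log", "--oneline", "-3")                 # show the recent audit trail
```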
Project description: Background: Trauma survivors often have to negotiate legal systems such as refugee status determination or the criminal justice system. Methods & results: We outline and discuss the contribution which research on trauma and related psychological processes can make to two particular areas of law where complex and difficult legal decisions must be made: in claims for refugee and humanitarian protection, and in reporting and prosecuting sexual assault in the criminal justice system. Conclusion: There is a breadth of psychological knowledge that, if correctly applied, would limit the inappropriate reliance on assumptions and myth in legal decision-making in these settings. Specific recommendations are made for further study.