Project description: Two questions regarding the scientific literature have become grist for public discussion: 1) what place should P values have in reporting the results of studies? 2) how should the perceived difficulty in replicating the results of published studies be addressed? We consider these questions to be two sides of the same coin; failing to address them can lead to an incomplete or incorrect message being sent to the reader. If P values (which are derived from the estimate of the effect size and a measure of the precision of that estimate) are used improperly, for example by reporting only significant findings, reporting P values without accounting for multiple comparisons, or failing to indicate the number of tests performed, the scientific record can be biased. Moreover, if there is a lack of transparency in the conduct of a study and the reporting of its results, it will not be possible to repeat the study in a manner that allows inferences from the original study to be reproduced, or to design and conduct a different experiment whose aim is to confirm the original study's findings. The goal of this article is to discuss how P values can be used in a manner consistent with the scientific method, and how transparency and reproducibility can be increased in the conduct and analysis of nutrition research.
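As an illustrative sketch only (the P values below are made up and are not taken from any study discussed in this article), base R's p.adjust() shows how all tests performed can be reported alongside multiple-comparison adjustments in a few lines:

p_raw <- c(0.003, 0.021, 0.047, 0.12, 0.34, 0.78)        # hypothetical P values from six pre-specified tests
p_bh   <- p.adjust(p_raw, method = "BH")                   # Benjamini-Hochberg (false discovery rate) adjustment
p_bonf <- p.adjust(p_raw, method = "bonferroni")           # Bonferroni (family-wise error rate) adjustment
data.frame(test = seq_along(p_raw), p_raw, p_bh, p_bonf)   # report every test, raw and adjusted

Reporting the full table, rather than only the tests that cross a significance threshold, is one concrete way to keep the number of comparisons visible to the reader.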
Project description: The 2017 American College of Neuropsychopharmacology (ACNP) conference hosted a Study Group on 4 December 2017, "Establishing best practice guidelines to improve the rigor, reproducibility, and transparency of the maternal immune activation (MIA) animal model of neurodevelopmental abnormalities." The goals of this session were to (a) evaluate the current literature and establish a consensus on best practices to be implemented in MIA studies, (b) identify remaining research gaps warranting additional data collection and thereby support the development of evidence-based best-practice designs, and (c) inform the MIA research community of these findings. During this session, there was a detailed discussion of the importance of validating immunogen doses and standardizing the general design (e.g., species, immunogenic compound used, housing) of our MIA models both within and across laboratories. The consensus of the study group was that the data needed to support specific evidence-based model selection or methodological recommendations do not currently exist because of inconsistent reporting, and that this issue extends to other inflammatory models of neurodevelopmental abnormalities. This launched a call to establish a reporting checklist focusing on validation, implementation, and transparency, modeled on the ARRIVE and CONSORT guidelines (scientific reporting guidelines for animal and clinical research, respectively). Here we provide a summary of the discussions together with a suggested checklist of reporting guidelines needed to improve the rigor and reproducibility of this valuable translational model, which can be adapted and applied to other animal models as well.
Project description: Background: In response to the COVID-19 pandemic, our microbial diagnostic laboratory, located in a university hospital, implemented several distinct SARS-CoV-2 RT-PCR systems in a very short time. More than 148,000 tests have been performed over 12 months, about 405 tests per day on average, with peaks of more than 1,500 tests per day during the second wave. This was possible only thanks to automation and digitalization, which allowed high throughput and acceptable time to results while maintaining test reliability. An automated dashboard was developed to give access to key performance indicators (KPIs) and improve laboratory operational management. Methods: Extraction of RT-PCR data for four respiratory viruses (SARS-CoV-2, influenza A and B, and RSV) from our laboratory information system (LIS) was automated. The extracted data included age, gender, test result, RT-PCR instrument, sample type, reception time, requester, hospitalization status, etc. Important KPIs were identified, and visualization was achieved using an in-house dashboard based on the open-source language R (Shiny). Results: The dashboard is organized into three main parts. The "Filter" page presents all the KPIs, divided into five sections: (i) general and gender-related indicators, (ii) number of tests and positivity rate, (iii) cycle threshold and viral load, (iv) test durations, and (v) invalid results. Filtering allows the user to select a given period, a dedicated instrument, a given specimen type, an age range, or a requester. The "Comparison" page allows custom charting of all the available variables, representing more than 182 combinations. The "Data" page gives the user access to the raw data in table format, with filtering and download options, allowing deeper analysis. The information is updated every 4 h. Conclusions: By giving rapid access to a large amount of up-to-date information, presented with the most relevant visualization types and without the burden of time-consuming data extraction and analysis, the dashboard is a reliable and user-friendly tool for operational laboratory management, improving decision-making, resource planning, and quality management.
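A minimal sketch of how one page of such an R/Shiny dashboard could be wired up; the file name and column names (instrument, result) are assumptions for illustration, not the laboratory's actual LIS export or dashboard code:

library(shiny)

rtpcr <- read.csv("lis_export.csv")   # assumed 4-hourly extraction from the LIS (hypothetical file)

ui <- fluidPage(
  selectInput("instrument", "RT-PCR instrument", choices = unique(rtpcr$instrument)),
  plotOutput("positivity")
)

server <- function(input, output) {
  output$positivity <- renderPlot({
    d <- subset(rtpcr, instrument == input$instrument)            # filter to the selected instrument
    barplot(prop.table(table(d$result)), ylab = "Proportion of tests")  # share of each test result
  })
}

shinyApp(ui, server)

In a real deployment, the same pattern of pre-extracted data plus reactive filtering would extend to the other KPIs (positivity rate, cycle threshold, test duration) described above.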
Project description: Many publications on COVID-19 were released on preprint servers such as medRxiv and bioRxiv. It is unknown how reliable these preprints are and which ones will eventually be published in scientific journals. In this study, we use crowdsourced human forecasts to predict publication outcomes and future citation counts for a sample of 400 preprints with high Altmetric scores. Most of these preprints were published within 1 year of upload to a preprint server (70%), with a considerable fraction (45%) appearing in a high-impact journal with a journal impact factor of at least 10. On average, the preprints received 162 citations within the first year. We found that forecasters can predict whether preprints will be published after 1 year and whether the publishing journal has high impact. Forecasts are also informative with respect to Google Scholar citations within 1 year of upload to a preprint server. For both types of assessment, we found statistically significant positive correlations between forecasts and observed outcomes. While the forecasts can help to provide a preliminary assessment of preprints at a faster pace than traditional peer review, it remains to be investigated whether such an assessment is suited to identifying methodological problems in preprints.
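A sketch, with made-up numbers rather than the study's data, of how a forecast-outcome correlation of this kind could be assessed in R:

forecast <- c(120, 80, 300, 50, 210, 95)    # hypothetical crowd-forecast citation counts
observed <- c(150, 60, 280, 70, 190, 110)   # hypothetical observed Google Scholar citations at 1 year
cor.test(forecast, observed, method = "spearman")   # rank correlation between forecasts and outcomes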