Project description: Rapid and accurate laboratory diagnosis of active COVID-19 infection is one of the cornerstones of pandemic control. With the myriad of tests available on the market, choosing the correct specimen type and laboratory testing technique for the right clinical scenario can be challenging for non-specialists. In this mini-review, we will discuss the differences in diagnostic performance among upper and lower respiratory tract specimens, and the role of blood and fecal specimens. We will analyze the performance characteristics of nucleic acid amplification tests, antigen detection tests, antibody detection tests, and point-of-care tests. Finally, the dynamics of viral replication and antibody production, and the interpretation of laboratory results in conjunction with clinical scenarios, will be discussed.
Project description: Objectives: The present study aimed to develop a clinical decision support tool to assist coronavirus disease 2019 (COVID-19) diagnosis with machine learning (ML) models using routine laboratory test results. Methods: We developed ML models using laboratory data (n = 1,391) composed of six clinical chemistry (CC) results, 14 complete blood count (CBC) parameter results, and the result of a severe acute respiratory syndrome coronavirus 2 real-time reverse transcription-polymerase chain reaction as the gold standard method. Four ML algorithms, including random forest (RF), gradient boosting (XGBoost), support vector machine (SVM), and logistic regression, were used to build eight ML models from CBC parameters alone and from a combination of CC and CBC parameters. Performance evaluation was conducted on the test data set and on an external validation data set from Brazil. Results: The accuracy values of all models ranged from 74% to 91%. The RF model trained on CC and CBC analytes showed the best performance on the present study's data set (accuracy, 85.3%; sensitivity, 79.6%; specificity, 91.2%). The RF model trained on CBC parameters alone detected COVID-19 cases with 82.8% accuracy. The best performance on the external validation data set belonged to the SVM model trained on CC and CBC parameters (accuracy, 91.18%; sensitivity, 100%; specificity, 84.21%). Conclusions: The ML models presented in this study can be used as clinical decision support tools to contribute to physicians' clinical judgment in COVID-19 diagnosis.
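As a rough illustration of the workflow this study describes (not its actual code or data), the sketch below trains three of the four classifier families on synthetic stand-ins for CC and CBC features and reports accuracy, sensitivity, and specificity; XGBoost is omitted to keep the example within scikit-learn, and all feature values and labels are fabricated.

```python
# Hedged sketch of training several classifiers on routine laboratory
# features to predict RT-PCR status. Data are synthetic; the real study
# used six CC and 14 CBC parameters from 1,391 patients.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 20))  # synthetic stand-ins for CC + CBC values
# Synthetic labels loosely correlated with the first feature
y = (X[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "LR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    acc = accuracy_score(y_te, pred)
    sens = recall_score(y_te, pred)               # sensitivity (recall on positives)
    spec = recall_score(y_te, pred, pos_label=0)  # specificity (recall on negatives)
    print(f"{name}: acc={acc:.2f} sens={sens:.2f} spec={spec:.2f}")
```

On real data, each model would be evaluated on a held-out test set and an external validation set, as the study did with its Brazilian cohort.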
Project description: Describing the relationship between the use of laboratory tests and changes in laboratory parameters in ICU patients is necessary to help optimize routine laboratory testing. A retrospective, descriptive study was conducted on the large eICU Collaborative Research Database. The relationship between the use of routine laboratory tests (chemistry and blood counts) and changes in ten common laboratory parameters was studied. Factors associated with laboratory testing were identified in a multivariate regression analysis using generalized estimating equation (GEE) Poisson models. The study included 138,734 patient stays, with an ICU mortality of 8.97%. For all parameters, the proportion of patients with at least one test decreased from day 0 to day 1 and then gradually increased until the end of the ICU stay. Paradoxically, the results of almost all tests moved toward normal values, and the daily variation in the results of almost all tests decreased over time. The presence of an arterial catheter and admission to a teaching hospital were independently associated with an increase in the number of laboratory tests performed. The paradox of routine laboratory testing should be further explored by assessing the factors that drive the decision to perform routine laboratory testing in the ICU and the impact of such testing on patients.
Project description: A growing body of evidence demonstrates that asymptomatic and pre-symptomatic transmission of SARS-CoV-2 is a major contributor to the COVID-19 pandemic. Frontline healthcare workers in COVID-19 hotspots have faced numerous challenges, including shortages of personal protective equipment (PPE) and difficulties acquiring clinical testing. The magnitude of the exposure of healthcare workers and the potential for asymptomatic transmission make it critical to understand the incidence of infection in this population. To determine the prevalence of asymptomatic SARS-CoV-2 infection among healthcare workers, we studied frontline staff working in the Montefiore Health System in New York City. All participants were asymptomatic at the time of testing and were tested by RT-qPCR and for anti-SARS-CoV-2 antibodies. The medical, occupational, and COVID-19 exposure histories of participants were recorded via questionnaires. Of the 98 asymptomatic healthcare workers tested, 19 (19.4%) tested positive by RT-qPCR and/or ELISA. Within this group, four (4.1%) were RT-qPCR positive only, and four (4.1%) were PCR and IgG positive. Notably, an additional 11 (11.2%) individuals were IgG positive without a positive PCR. Two PCR-positive individuals subsequently developed COVID-19 symptoms, while all others remained asymptomatic at 2-week follow-up. These results indicate that there is considerable asymptomatic infection with SARS-CoV-2 within the healthcare workforce, despite current mitigation policies. Furthermore, presuming that asymptomatic staff are not carrying SARS-CoV-2 is inconsistent with our results, and this could result in amplified transmission within healthcare settings. Consequently, aggressive testing regimens, such as testing frontline healthcare workers on a regular, multi-modal basis, may be required to prevent further spread within the workforce and to patients.
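The headline prevalence above is a simple binomial proportion (19 of 98). The short sketch below reproduces that arithmetic and adds a Wilson 95% confidence interval for context; the interval is our illustration, not a figure reported by the study.

```python
# Prevalence point estimate from the counts quoted above, plus a Wilson
# score confidence interval (our addition, not from the study itself).
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

k, n = 19, 98  # positives by RT-qPCR and/or ELISA among those tested
print(f"prevalence: {k / n:.1%}")  # 19.4%
lo, hi = wilson_ci(k, n)
print(f"95% CI: {lo:.1%} to {hi:.1%}")
```

The wide interval reflects the small sample: a point estimate of 19.4% from 98 workers is consistent with a true prevalence anywhere from roughly 13% to 28%.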
Project description: Factors such as varied definitions of mortality, uncertainty in disease prevalence, and biased sampling complicate the quantification of fatality during an epidemic. Regardless of the fatality measure employed, the infected population and the number of infection-caused deaths need to be consistently estimated to compare mortality across regions. We combine historical and current mortality data, a statistical testing model, and an SIR epidemic model to improve the estimation of mortality. We find that the average excess death count across the entire US from January 2020 until February 2021 is 9% higher than the number of reported COVID-19 deaths. In some areas, such as New York City, the number of weekly deaths was about eight times higher than in previous years. Other countries, such as Peru, Ecuador, Mexico, and Spain, exhibit excess deaths significantly higher than their reported COVID-19 deaths. Conversely, we find statistically insignificant or even negative excess deaths for at least most of 2020 in places such as Germany, Denmark, and Norway.
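The core excess-death idea described above can be sketched in a few lines: compare observed weekly deaths during the pandemic with a baseline expected from prior years, sum the difference, and relate it to reported COVID-19 deaths. All numbers below are fabricated for illustration; this is not the study's model, which additionally couples the estimate to an SIR epidemic model and a statistical testing model.

```python
# Minimal excess-deaths sketch on synthetic data: observed pandemic-year
# deaths vs. a baseline averaged over five synthetic pre-pandemic years.
import numpy as np

rng = np.random.default_rng(2)
weeks = 52
baseline_years = rng.normal(1000, 30, size=(5, weeks))  # 5 pre-pandemic years
expected = baseline_years.mean(axis=0)                  # per-week baseline
observed = expected + rng.normal(120, 40, size=weeks)   # pandemic-year deaths

excess = observed - expected
total_excess = excess.sum()
reported_covid_deaths = 0.9 * total_excess  # pretend reporting undercounts by 10%

ratio = total_excess / reported_covid_deaths
print(f"total excess deaths: {total_excess:.0f}")
print(f"excess / reported:   {ratio:.2f}")  # > 1 means deaths beyond those reported
```

A ratio above one corresponds to the paper's finding that US excess deaths exceeded reported COVID-19 deaths; a ratio near or below one corresponds to countries where excess mortality was insignificant or negative.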
Project description: A considerable number of individuals infected by COVID-19 died in self-isolation. This paper uses a graphical inference method to examine whether patients were endogenously assigned to self-isolation during the early phase of the COVID-19 epidemic in Ontario. The endogeneity of patient assignment is evaluated from a dependence measure revealing relationships between patients' characteristics and their location at the time of death. We test for the absence of assignment endogeneity in daily samples and study the dynamics of endogeneity. This methodology is applied to patients' characteristics such as age, gender, location of the diagnosing health unit, presence of symptoms, and underlying health conditions.
Project description: Objectives: Coronavirus disease 2019 was declared a global pandemic in March 2020, with correct and early detection of cases using laboratory testing central to the response. Hence, the establishment of quality management systems and monitoring of their implementation are critical. This study describes the experience of implementing the COVID-19 Laboratory Testing and Certification Program (CoLTeP) in Africa. Methods: Private and public laboratories conducting SARS-CoV-2 testing using polymerase chain reaction were enrolled and assessed for quality and safety using the CoLTeP checklists. Results: A total of 84 laboratories from 7 countries were assessed between April 2021 and December 2021, 52% of them from the private sector. Of these, 64% attained 5 stars and were certified. Section 4 had the highest average score (92%), while Section 3 had the lowest (78%). Also, 82% of non-conformities (NCs) were related to sample collection, transportation, and risk assessments. Non-availability of, inconsistency in performing and recording, and failure to institute corrective actions for failed internal and external quality controls were among the major NCs reported. Conclusions: Laboratories identified for SARS-CoV-2 testing by public and private institutions mostly met the requirements for quality and safe testing as measured by the CoLTeP checklists.
Project description: Objective: To distinguish COVID-19 patients from non-COVID-19 viral pneumonia patients, and to classify COVID-19 patients into low-risk and high-risk groups at admission, using laboratory indicators. Materials and methods: In this retrospective cohort, a total of 3,563 COVID-19 patients and 118 non-COVID-19 pneumonia patients were included. There were two cohorts of COVID-19 patients: 548 patients in the training dataset and 3,015 patients in the testing dataset. Laboratory indicators were measured during hospitalization for all patients. Based on laboratory indicators, we used a support vector machine and joint random sampling for risk stratification of COVID-19 patients at admission. Based on laboratory indicators detected within the first week after admission, we used logistic regression and joint random sampling to develop the survival model. The laboratory indicators of COVID-19 and non-COVID-19 patients were also compared. Results: We first identified the significant laboratory indicators related to the severity of COVID-19 in the training dataset. Neutrophil percentage, lymphocyte percentage, creatinine, and blood urea nitrogen, each with AUC > 0.7, were included in the model. These indicators were then used to build a support vector machine model to classify patients in the testing dataset into low-risk and high-risk at admission. Results showed that this model stratified the patients in the testing dataset effectively (AUC = 0.89). The model also performed well at different times (mean AUC: 0.71, 0.72, and 0.72 at 3, 5, and 7 days after admission, respectively). Moreover, laboratory indicators detected within the first week after admission were able to estimate the probability of death (AUC = 0.95).
We identified six indicators with permutation p < 0.05, including eosinophil percentage (p = 0.007), white blood cell count (p = 0.045), albumin (p = 0.041), aspartate transaminase (p = 0.043), lactate dehydrogenase (p = 0.002), and hemoglobin (p = 0.031). Based on these laboratory indicators, COVID-19 could be diagnosed and differentiated from other kinds of viral pneumonia. Conclusions: Our risk-stratification model based on laboratory indicators could help to diagnose, monitor, and predict severity at an early stage of COVID-19. In addition, laboratory findings could be used to distinguish COVID-19 from non-COVID-19 pneumonia.
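The risk-stratification step described in this study can be sketched as follows: fit a support vector machine on a handful of admission laboratory indicators and score patients by AUC. The sketch uses synthetic data with four stand-in features (the real indicators were neutrophil percentage, lymphocyte percentage, creatinine, and blood urea nitrogen) and omits the study's joint random sampling procedure.

```python
# Hedged sketch of SVM-based risk stratification from four admission labs.
# Data are synthetic; this is not the study's code or cohort.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 800
X = rng.normal(size=(n, 4))  # stand-ins for the four laboratory indicators
# Synthetic high-risk labels driven by two of the features plus noise
y = (0.8 * X[:, 0] - 0.6 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)
clf = make_pipeline(StandardScaler(), SVC(probability=True, random_state=3))
clf.fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]  # predicted probability of high risk
print(f"AUC = {roc_auc_score(y_te, scores):.2f}")
```

Scaling the features before the SVM matters here because laboratory indicators live on very different scales (percentages vs. mg/dL), and an unscaled RBF kernel would be dominated by the largest-valued analyte.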
Project description: Real-time PCR has revolutionized the way clinical microbiology laboratories diagnose many human microbial infections. This testing method combines PCR chemistry with fluorescent probe detection of amplified product in the same reaction vessel. In general, both PCR and amplified product detection are completed in an hour or less, which is considerably faster than conventional PCR detection methods. Real-time PCR assays provide sensitivity and specificity equivalent to that of conventional PCR combined with Southern blot analysis, and since amplification and detection steps are performed in the same closed vessel, the risk of releasing amplified nucleic acids into the environment is negligible. The combination of excellent sensitivity and specificity, low contamination risk, and speed has made real-time PCR technology an appealing alternative to culture- or immunoassay-based testing methods for diagnosing many infectious diseases. This review focuses on the application of real-time PCR in the clinical microbiology laboratory.