Project description: Aims: To validate the diagnostic accuracy of the Augurix SARS-CoV-2 IgM/IgG rapid immunoassay diagnostic test (RDT) for COVID-19. Methods: In this unmatched 1:1 case-control study, blood samples from 46 real-time RT-PCR-confirmed SARS-CoV-2 hospitalized cases and 45 healthy donors (negative controls) were studied. Diagnostic accuracy of the IgG RDT was assessed against both an in-house recombinant spike-expressing immunofluorescence assay (rIFA), as an established reference method (primary endpoint), and the Euroimmun SARS-CoV-2 IgG enzyme-linked immunosorbent assay (ELISA) (secondary endpoint). Results: COVID-19 patients were more likely to be male (61% vs 20%; P = .0001) and older (median 66 vs 47 years; P < .001) than controls. Whole-blood IgG-RDT results showed 86% and 93% overall Kendall concordance with rIFA and IgG ELISA, respectively. IgG RDT performance was similar between plasma and whole blood. Overall, RDT sensitivity was 88% (95% confidence interval [95% CI]: 70-96), specificity 98% (95% CI: 90-100), positive predictive value (PPV) 97% (95% CI: 80-100) and negative predictive value (NPV) 94% (95% CI: 84-98). The IgG-RDT carried out 0 to 6 days, 7 to 14 days and >14 days after the SARS-CoV-2 RT-PCR test displayed 30%, 73% and 100% positivity rates in the COVID-19 group, respectively. When considering samples taken >14 days after RT-PCR diagnosis, NPV was 100% (95% CI: 90-100) and PPV was 100% (95% CI: 72-100). Conclusions: The Augurix IgG-RDT performed in whole blood displays high diagnostic accuracy for SARS-CoV-2 IgG in high-prevalence COVID-19 settings, where its use could be considered in the absence of routine diagnostic serology facilities.
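The four accuracy measures reported above all follow from a 2x2 confusion table. A minimal sketch of that computation (the counts below are hypothetical, chosen only to be consistent with the reported 88% sensitivity; they are not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV as fractions
    from a 2x2 confusion table of test result vs reference standard."""
    sensitivity = tp / (tp + fn)   # true positives among diseased
    specificity = tn / (tn + fp)   # true negatives among non-diseased
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts: 28 true positives, 1 false positive,
# 4 false negatives, 44 true negatives.
sens, spec, ppv, npv = diagnostic_metrics(tp=28, fp=1, fn=4, tn=44)
```

Note that, unlike sensitivity and specificity, PPV and NPV depend on the prevalence in the tested population, which is why the abstract qualifies its conclusion to high-prevalence settings.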
Project description: Background: Since the beginning of the COVID-19 pandemic, researchers and health authorities have sought to identify the parameters that drive its local transmission cycles in order to make better decisions regarding prevention and control measures. Different modeling approaches have been proposed in an attempt to predict the behavior of these local cycles. Objective: This paper presents a framework to characterize the variables that drive the local, or epidemic, cycles of the COVID-19 pandemic, in order to provide a set of relatively simple yet efficient statistical tools for local health authorities to support decision making. Methods: Virtually closed cycles were compared to cycles in progress from different locations that present similar patterns in the figures that describe them. To compare populations of different sizes at different periods of time and locations, the cycles were normalized, allowing an analysis based on the core behavior of the numerical series. A model for the reproduction number was derived from the experimental data, and its performance was presented, including the effect of subnotification (ie, underreporting). A variation of the logistic model was used together with an innovative inventory model to calculate the actual number of infected persons, analyze the incubation period, and determine the actual onset of local epidemic cycles. Results: The similarities among cycles were demonstrated. A pattern among the cycles studied, which took on a triangular shape, was identified and used to make predictions about the duration of future cycles. Analyses of the effective reproduction number (Rt) and subnotification effects for Germany, Italy, and Sweden were presented to show the performance of the framework introduced here.
After comparing data from the three countries, it was possible to determine the probable dates of the actual onset of the epidemic cycles for each country, the typical duration of the incubation period for the disease, and the total number of infected persons during each cycle. In general terms, a probable average incubation time of 5 days was found, and the method used here was able to estimate the end of the cycles up to 34 days in advance, while demonstrating that the impact of the subnotification level (ie, error) on the effective reproduction number was <5%. Conclusions: It was demonstrated that, with relatively simple mathematical tools, it is possible to obtain a reliable understanding of the behavior of COVID-19 local epidemic cycles by introducing an integrated framework for identifying cycle patterns and calculating the variables that drive them, namely: Rt, the effects of subnotification on estimates, the most probable actual cycle start dates, the total number of infected persons, and the most likely incubation period for SARS-CoV-2.
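The study's exact Rt model is not given in the abstract, but the intuition behind any such estimate can be sketched with a crude ratio estimator: new cases on day t divided by the mean incidence over the preceding serial interval. This is an illustrative approximation on synthetic data, not the authors' derived model; it also illustrates why constant underreporting largely cancels out of Rt, since it scales numerator and denominator alike.

```python
def simple_rt(cases, serial_interval=5):
    """Crude time-varying reproduction number:
    cases[t] divided by the mean of the previous serial_interval days."""
    rt = []
    for t in range(serial_interval, len(cases)):
        mean_prev = sum(cases[t - serial_interval:t]) / serial_interval
        rt.append(cases[t] / mean_prev if mean_prev > 0 else float("nan"))
    return rt

# Synthetic epidemic curve growing ~10% per day
cases = [round(100 * 1.1 ** t) for t in range(15)]
rt_series = simple_rt(cases)  # every value > 1 while cases grow
```

With a constant underreporting factor, every term in both the numerator and the denominator is scaled equally, so this ratio estimate is essentially unchanged, consistent with the <5% impact reported above.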
Project description: Background: Egypt was among the first 10 countries in Africa to experience COVID-19 cases. The sudden surge in the number of cases is overwhelming the capacity of national healthcare systems, particularly in developing countries. Central to the containment of the ongoing pandemic is the availability of rapid and accurate diagnostic tests that can pinpoint patients at early disease stages. In the current study, we aimed to (1) evaluate the diagnostic performance of the rapid antigen test (RAT) "Standard™ Q COVID-19 Ag" against reverse transcription quantitative real-time PCR (RT-qPCR) in eighty-three swabs collected from COVID-19-suspected individuals with various demographic features and clinical and radiological findings; (2) test whether measuring laboratory parameters in participants' blood would enhance the predictive accuracy of RAT; and (3) identify the most important features that determine the results of both RAT and RT-qPCR. Methods: Diagnostic measurements (e.g., sensitivity, specificity) and the receiver operating characteristic curve were used to assess the clinical performance of "Standard™ Q COVID-19 Ag". We used a support vector machine (SVM) model to investigate whether measuring laboratory indices would enhance the accuracy of RAT. Moreover, a random forest classification model was used to determine the most important determinants of the results of RAT and RT-qPCR for COVID-19 diagnosis. Results: The sensitivity, specificity, and accuracy of RAT were 78.2%, 64.2%, and 75.9%, respectively. Samples with high viral load and those collected within one week post-symptoms showed the highest sensitivity and accuracy. The SVM modeling showed that measuring laboratory indices did not enhance the predictive accuracy of RAT. Conclusion: "Standard™ Q COVID-19 Ag" should not be used alone for COVID-19 diagnosis due to its low diagnostic performance relative to RT-qPCR.
RAT is best used at the early disease stage and in patients with high viral load.
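The receiver operating characteristic analysis mentioned above summarizes discrimination as the area under the curve (AUC), which equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney interpretation). A self-contained sketch on hypothetical scores, not the study's data:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via pairwise comparisons: fraction of (positive, negative)
    pairs where the positive case scores higher; ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical scores for RT-qPCR-positive vs -negative participants
auc = roc_auc([0.9, 0.8, 0.7, 0.4], [0.5, 0.3, 0.2, 0.1])
```

An AUC of 0.5 corresponds to chance-level discrimination; values approaching 1.0 indicate a score that separates positives from negatives well.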
Project description: Timely detection of an evolving infectious disease event with superspreading potential is imperative for territory-wide disease control as well as for preventing future outbreaks. While the reproduction number (R) is a commonly adopted metric for disease transmissibility, the transmission heterogeneity quantified by the dispersion parameter k, a metric for superspreading potential, is seldom tracked. In this study, we developed an estimation framework to track the time-varying risk of superspreading events (SSEs) and demonstrated the method using the three epidemic waves of COVID-19 in Hong Kong. Epidemiological contact tracing data of confirmed COVID-19 cases from 23 January 2020 to 30 September 2021 were obtained. By applying branching process models, we jointly estimated the time-varying R and k. Individual-based outbreak simulations were conducted to compare the time-varying assessment of superspreading potential with the typical non-time-varying estimate of k over a period of time. We found that COVID-19 transmission in Hong Kong exhibited substantial superspreading during the initial phase of the epidemics, with only 1% (95% credible interval [CrI]: 0.6-2%), 5% (95% CrI: 3-7%) and 10% (95% CrI: 8-14%) of the most infectious cases generating 80% of all transmission for the first, second and third epidemic waves, respectively. After local public health interventions were implemented, R estimates dropped gradually and k estimates increased, reducing the risk of SSEs to near zero. Outbreak simulations indicated that the non-time-varying estimate of k may overlook the possibility of large outbreaks. Hence, estimating the time-varying k as a complement to R, in order to monitor both disease transmissibility and superspreading potential, is crucial for minimizing the risk of future outbreaks, particularly when public health interventions are relaxed.
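The "x% of cases generate 80% of transmission" statistic can be reproduced in a toy branching-process step: offspring counts are drawn from a negative binomial with mean R and dispersion k (here via the standard gamma-Poisson mixture), then the smallest share of the most infectious cases accounting for 80% of all secondary cases is computed. This is an assumption-laden sketch with illustrative parameter values, not the authors' estimation code:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for small-to-moderate lam."""
    L, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= rng.random()
        if p <= L:
            return n
        n += 1

def prop_responsible(R, k, n_cases, share=0.8, seed=1):
    """Fraction of the most infectious cases producing `share`
    of all transmission, under NegBin(mean=R, dispersion=k) offspring."""
    rng = random.Random(seed)
    offspring = sorted(
        (poisson(rng.gammavariate(k, R / k), rng) for _ in range(n_cases)),
        reverse=True)
    total = sum(offspring)
    cum, i = 0, 0
    while cum < share * total:
        cum += offspring[i]
        i += 1
    return i / n_cases

# Low k means strong heterogeneity: only a small fraction of cases
# drives 80% of transmission, as in the early Hong Kong waves.
p = prop_responsible(R=1.2, k=0.2, n_cases=5000)
```

As k grows large the offspring distribution approaches a Poisson, transmission becomes homogeneous, and this fraction rises toward its Poisson limit, which is why the increase in k after interventions implies a lower SSE risk.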
Project description: Timely diagnostic testing for active SARS-CoV-2 infection is key to controlling the spread of the virus and preventing severe disease. A central public health challenge is defining test allocation strategies with limited resources. In this paper, we provide a mathematical framework for defining an optimal strategy for allocating viral diagnostic tests. The framework accounts for imperfect test results, selective testing in certain high-risk patient populations, practical constraints in terms of budget and/or total number of available tests, and the purpose of testing. Our method is not only useful for detecting infections but can also be used for long-term surveillance to detect new outbreaks. In our proposed approach, tests can be allocated across population strata defined by symptom severity and other patient characteristics, allowing the test allocation plan to prioritize higher-risk patient populations. We illustrate our framework using historical data from the initial wave of the COVID-19 outbreak in New York City. We extend our proposed method to address the challenge of allocating two different types of diagnostic tests with different costs and accuracy, for example, the RT-PCR and the rapid antigen test (RAT), under budget constraints. We show how this latter framework can be useful for the reopening of college campuses, where university administrators are challenged with finite resources for community surveillance. We provide an R Shiny web application allowing users to explore test allocation strategies across a variety of pandemic scenarios. This work can serve as a useful tool for guiding public health decision-making at the community level and adapting testing plans to different stages of an epidemic. The conceptual framework has broader relevance beyond the current COVID-19 pandemic.
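A drastically simplified version of the allocation problem makes the stratification idea concrete: with a fixed test budget, known stratum sizes, stratum prevalences, and a single test sensitivity, expected detections per test are constant within a stratum, so filling strata greedily by expected yield is optimal under these simplifications. This is a hypothetical sketch, not the paper's formulation (which handles imperfect tests and two test types more generally); all names and numbers below are illustrative.

```python
def allocate_tests(strata, budget):
    """strata: {name: (population, prevalence, sensitivity)}.
    Returns tests assigned per stratum, maximizing expected detections
    under the simplifying assumptions described above."""
    alloc = {name: 0 for name in strata}
    # Rank strata by expected detections per test: prevalence * sensitivity
    order = sorted(strata,
                   key=lambda s: strata[s][1] * strata[s][2],
                   reverse=True)
    remaining = budget
    for name in order:
        population = strata[name][0]
        take = min(population, remaining)  # cap at stratum size
        alloc[name] = take
        remaining -= take
        if remaining == 0:
            break
    return alloc

# Hypothetical strata: (size, prevalence, test sensitivity)
alloc = allocate_tests(
    {"severe": (200, 0.30, 0.95),
     "mild": (1000, 0.10, 0.90),
     "asymptomatic": (5000, 0.02, 0.85)},
    budget=800)
```

The high-prevalence "severe" stratum is saturated first and the remaining budget flows to the next-best stratum, mirroring the paper's point that allocation plans should prioritize higher-risk populations.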
Project description: The WHO-named coronavirus disease 2019 (COVID-19) became a pandemic within a short period after it was detected in Wuhan. The outbreak required the screening of millions of samples daily and overwhelmed diagnostic laboratories worldwide. During this pandemic, handling patient specimens according to the universal guidelines was extremely difficult, as the WHO, CDC and ECDC required cold-chain compliance during transport and storage of swab samples. The aim of this study was to compare the effects of two different storage conditions on the COVID-19 real-time PCR assay, using 30 positive nasopharyngeal and/or oropharyngeal samples stored at both ambient temperature (22 ± 2 °C) and +4 °C. The results revealed that all the samples stored at ambient temperature remained PCR-positive for at least six days without any false-negative result. In conclusion, transporting and storing these types of swab samples at ambient temperature for six days under resource-limited conditions during the COVID-19 pandemic is acceptable.
Project description: Background: The Tata MD CHECK SARS-CoV-2 kit 1.0, a CRISPR-based reverse transcription PCR (TMC-CRISPR) test, was approved by the Indian Council of Medical Research (ICMR) for COVID-19 diagnosis in India. To determine the potential for rapid roll-out of this test, we conducted a performance characteristics evaluation and an operational feasibility assessment (OFA) in a tertiary care setting. Intervention: The study was conducted at an ICMR-approved COVID-19 RT-PCR laboratory of King Edward Memorial (KEM) hospital, Mumbai, India. The TMC-CRISPR test was evaluated against the gold-standard RT-PCR test using the same RNA sample extracted from fresh and frozen clinical specimens collected from COVID-19 suspects for routine diagnosis. TMC-CRISPR results were determined both manually and using the Tata MD CHECK application. An independent agency interviewed relevant laboratory staff and supervisors for the OFA. Results: Overall, 2,332 (fresh: 2,121; frozen: 211) clinical specimens were analysed, of which 140 (6%) were detected positive for COVID-19 by TMC-CRISPR compared to 261 (11%) by RT-PCR. Overall sensitivity and specificity of TMC-CRISPR were 44% (95% CI: 38.1%-50.1%) and 99% (95% CI: 98.2%-99.1%), respectively, when compared to RT-PCR. Discordance between TMC-CRISPR and RT-PCR results increased with increasing Ct values and correspondingly decreasing viral load (range: <20% to >85%). In the OFA, all participants indicated no additional training requirements for setting up RT-PCR. However, extra post-PCR steps, such as setting up the CRISPR reaction and handling the detection strips, were time consuming and required special training. No significant difference was observed between manual and mobile app-based readings.
However, issues such as erroneous results, difficulty in interpreting faint bands, internet connectivity, and data safety and security were highlighted as challenges with the app-based readings. Conclusion: The evaluated version, Tata MD CHECK SARS-CoV-2 kit 1.0 of the TMC-CRISPR test, cannot be considered an alternative to RT-PCR. There is definite scope for improvement in this assay.
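The confidence intervals quoted for sensitivity can be reproduced with a standard interval for a binomial proportion. The abstract does not state which CI method the authors used; the Wilson score interval below is one common choice, and the input counts are inferred from the reported figures (roughly 115 of the 261 RT-PCR positives detected, consistent with 44% sensitivity), not taken from raw data.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion.
    Better behaved than the naive Wald interval near 0 or 1."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# ~115 of 261 RT-PCR positives detected -> point estimate ~44%
lo, hi = wilson_ci(115, 261)
```

With these counts the interval works out to roughly 38%-50%, matching the 38.1%-50.1% quoted above, which suggests a score-type interval is a reasonable reading of the reported figures.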
Project description: Timely and accurate laboratory testing is essential for managing the global COVID-19 pandemic. Reverse transcription polymerase chain reaction remains the gold standard for SARS-CoV-2 diagnosis, but several practical issues limit the test's use. Immunoassays have been indicated as an alternative for individual and mass testing. Objectives: To assess the performance of 12 serological tests for COVID-19 diagnosis. Methods: We conducted a blind evaluation of six lateral-flow immunoassays (LFIAs) and six enzyme-linked immunosorbent assays (ELISAs) commercially available in Brazil for detecting anti-SARS-CoV-2 antibodies. Results: Considering patients with seven or more days of symptoms, sensitivity ranged from 59.5% to 83.1% for LFIAs and from 50.7% to 92.6% for ELISAs. For both methods, sensitivity increased with clinical severity and days of symptoms. Agreement between LFIAs performed with digital (fingerstick) blood and with serum was moderate. Specificity was, in general, higher for LFIAs than for ELISAs. Infectious diseases prevalent in the tropics, such as HIV, leishmaniasis, arboviral infections, and malaria, can cause false-positive results with these tests, which significantly compromises their specificity. Conclusion: The performance of the immunoassays was only moderate and was affected by the duration and clinical severity of the disease. An absence of discriminatory power between IgM/IgA and IgG was also demonstrated, which prevents the use of acute-phase antibodies for decisions on social isolation.
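"Moderate agreement" between two qualitative readings (e.g. the same LFIA run on fingerstick blood vs serum) is conventionally quantified with Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. A sketch with hypothetical counts (the study's 2x2 agreement tables are not given in the abstract):

```python
def cohens_kappa(both_pos, pos_neg, neg_pos, both_neg):
    """Cohen's kappa from a 2x2 agreement table between two readings."""
    n = both_pos + pos_neg + neg_pos + both_neg
    observed = (both_pos + both_neg) / n
    # Chance agreement from each reading's marginal positivity rate
    p1 = (both_pos + pos_neg) / n  # reading 1 positive rate
    p2 = (both_pos + neg_pos) / n  # reading 2 positive rate
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - expected) / (1 - expected)

# Hypothetical blood-vs-serum table: 40 agree positive, 42 agree
# negative, 18 discordant out of 100 paired samples.
kappa = cohens_kappa(both_pos=40, pos_neg=10, neg_pos=8, both_neg=42)
```

On the usual interpretive scale, kappa around 0.4-0.6 is read as moderate agreement and 0.6-0.8 as substantial, which is why raw percent agreement alone can overstate how interchangeable two specimen types are.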