Project description: Screening and diagnostic tests are used to classify people into diseased and non-diseased populations. Although diagnostic accuracy measures are used to evaluate the correctness of this classification in clinical research and practice, there has been limited research on their uncertainty. The objective of this work was to develop a tool for calculating the uncertainty of diagnostic accuracy measures, as diagnostic accuracy is fundamental to clinical decision-making. To this end, the freely available interactive program Diagnostic Uncertainty was developed in the Wolfram Language. The program provides six modules with nine submodules for calculating and plotting the standard measurement, sampling and combined uncertainty, and the resultant confidence intervals, of various diagnostic accuracy measures of screening or diagnostic tests that measure a normally distributed measurand and are applied at a single point in time to samples of non-diseased and diseased populations. This is done for differing sample sizes, means and standard deviations of the measurand, diagnostic thresholds and standard measurement uncertainties of the test. The application of the program is demonstrated with an illustrative example of glucose measurements in samples of diabetic and non-diabetic populations, which shows the calculation of the uncertainty of diagnostic accuracy measures. The program is user-friendly and can be used as a flexible educational and research tool in medical decision-making, to calculate and explore the uncertainty of diagnostic accuracy measures.
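The core calculation is straightforward to reproduce outside the Wolfram Language. The following Python sketch is a simplified illustration, not the program's actual code: the function name, the propagation of measurement uncertainty through the normal CDF, and the glucose-like numbers are assumptions made for demonstration under the stated normality assumption.

```python
import numpy as np
from scipy.stats import norm

def accuracy_uncertainty(mu, sigma, threshold, u_m, n, diseased=True):
    """Sensitivity (diseased=True) or specificity (diseased=False) with
    measurement, sampling and combined standard uncertainty.
    Assumption: higher measurand values indicate disease."""
    z = (threshold - mu) / sigma
    p = 1 - norm.cdf(z) if diseased else norm.cdf(z)   # Se or Sp
    # measurement uncertainty u_m propagated through the normal CDF
    # via the local slope |dP/dthreshold| = pdf(z)/sigma
    u_meas = norm.pdf(z) / sigma * u_m
    # binomial sampling uncertainty for a sample of size n
    u_samp = np.sqrt(p * (1 - p) / n)
    u_comb = np.hypot(u_meas, u_samp)
    return p, u_meas, u_samp, u_comb

# Hypothetical glucose-like inputs: diseased ~ N(126, 15) mg/dL,
# threshold 100 mg/dL, standard measurement uncertainty 3 mg/dL, n = 100
se, um, us, uc = accuracy_uncertainty(126.0, 15.0, 100.0, 3.0, 100)
print(f"Se = {se:.3f}, u_meas = {um:.4f}, u_samp = {us:.4f}, u_comb = {uc:.4f}")
# An approximate 95% confidence interval would be se +/- 1.96 * uc.
```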
Project description: Summary: Dynamic programming (DP) is a general optimization strategy that is successfully used across various disciplines of science. In bioinformatics, it is widely applied to calculating the optimal alignment between pairs of protein or DNA sequences. These alignments form the basis of new, verifiable biological hypotheses. Despite its importance, there are no interactive tools available for training and education on the DP algorithm. Here, we introduce an interactive computer application with a graphical interface for the purpose of educating students about DP. The program displays the DP scoring matrix and the resulting optimal alignment(s), while allowing the user to modify key parameters such as the values in the similarity matrix, the version of the sequence alignment algorithm and the gap opening/extension penalties. We hope that this software will be useful to teachers and students of bioinformatics courses, as well as to researchers who implement the DP algorithm for diverse applications. Availability and implementation: The software is freely available at http://melolab.org/sat. It is written in the Java programming language and thus runs on all major platforms and operating systems, including Windows, Mac OS X and Linux. Contact: All inquiries or comments about this software should be directed to Francisco Melo at fmelo@bio.puc.cl.
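To make the algorithm concrete, here is a minimal Python sketch of the Needleman-Wunsch global alignment recurrence with a linear gap penalty. The application itself is written in Java and also supports separate gap opening/extension penalties (affine gaps) and other algorithm variants, which this simplification omits; the scoring values are arbitrary defaults.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment of sequences a and b with a linear gap penalty."""
    n, m = len(a), len(b)
    # DP scoring matrix, initialised along the first row/column with gaps
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # (mis)match
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    # traceback from the bottom-right corner to recover one optimal alignment
    top, bot, i, j = [], [], n, m
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + s:
            top.append(a[i - 1]); bot.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            top.append(a[i - 1]); bot.append('-'); i -= 1
        else:
            top.append('-'); bot.append(b[j - 1]); j -= 1
    return F[n][m], ''.join(reversed(top)), ''.join(reversed(bot))

score, s1, s2 = needleman_wunsch("GATTACA", "GCATGCU")
print(score); print(s1); print(s2)
```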
Project description: Screening and diagnostic tests are used to classify people with and without a disease. Diagnostic accuracy measures are used to evaluate the correctness of such a classification in clinical research and practice. Although this correctness depends on the uncertainty of measurement, there has been limited research on the relation between the two. The objective of this work was to develop a tool for exploring the relation between diagnostic accuracy measures and measurement uncertainty, as diagnostic accuracy is fundamental to clinical decision-making, while measurement uncertainty is critical to quality and risk management in laboratory medicine. For this reason, a freely available interactive program was developed for calculating, optimizing, plotting and comparing various diagnostic accuracy measures, and the corresponding risk, of diagnostic or screening tests that measure a normally distributed measurand and are applied at a single point in time to non-diseased and diseased populations. This is done for differing disease prevalence, means and standard deviations of the measurand, diagnostic thresholds, standard measurement uncertainties of the tests and expected loss. The application of the program is illustrated with a case study of glucose measurements in diabetic and non-diabetic populations. The program is user-friendly and can be used as an educational and research tool in medical decision-making.
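A minimal Python sketch of the kind of calculation such a program performs, under the stated normality assumption. Folding the standard measurement uncertainty into the population standard deviations, as well as the function name, loss weights and glucose-like numbers, are illustrative assumptions rather than the program's actual method.

```python
import numpy as np
from scipy.stats import norm

def risk_measures(mu_d, sd_d, mu_n, sd_n, threshold, u_m, prevalence,
                  loss_fn=1.0, loss_fp=1.0):
    """Se, Sp, predictive values and expected misclassification loss for a
    normally distributed measurand; measurement uncertainty u_m is folded
    into the population SDs (a common simplifying assumption)."""
    sd_d_eff = np.hypot(sd_d, u_m)
    sd_n_eff = np.hypot(sd_n, u_m)
    se = 1 - norm.cdf((threshold - mu_d) / sd_d_eff)   # P(positive | diseased)
    sp = norm.cdf((threshold - mu_n) / sd_n_eff)       # P(negative | non-diseased)
    ppv = se * prevalence / (se * prevalence + (1 - sp) * (1 - prevalence))
    npv = sp * (1 - prevalence) / (sp * (1 - prevalence) + (1 - se) * prevalence)
    # expected loss from false negatives and false positives at this threshold
    loss = prevalence * (1 - se) * loss_fn + (1 - prevalence) * (1 - sp) * loss_fp
    return se, sp, ppv, npv, loss

# Hypothetical glucose-like case: diabetic ~ N(126, 15), non-diabetic ~ N(90, 8),
# threshold 100 mg/dL, u_m = 3 mg/dL, prevalence 10%
print(risk_measures(126, 15, 90, 8, 100, 3, 0.10))
```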
Project description: This study examines how well second-year nonmajor organic chemistry students learn to draw, interpret, and understand resonance-related structures. Students were tested seven times throughout an academic year using a set of four tasks that reflected their understanding of what these structures represent and how they relate to each other. Statistical analysis was used to validate the tests, to investigate whether the tasks were mastered, and to examine possible correlations between the tasks and between each task and students' grades in the course. These data were also analyzed to determine which tasks were most difficult and to identify the most common errors associated with each task. This study seeks to raise awareness of the areas that prove most difficult for students and that could be limiting their mastery of the resonance concept.
Project description: Objectives: We seek to develop a prototype software analytical tool to augment FDA regulatory reviewers' capacity to harness scientific literature reports in PubMed/MEDLINE for pharmacovigilance and adverse drug event (ADE) safety signal detection. We also aim to gather feedback through usability testing to assess design, performance, and user satisfaction with the tool. Methods: A prototype, open-source, web-based software analytical tool generated statistical disproportionality data-mining signal scores and dynamic visual analytics for ADE safety signal detection and management. We leveraged Medical Subject Heading (MeSH) indexing terms assigned to published citations in PubMed/MEDLINE to generate candidate drug-adverse event pairs for quantitative data mining. Six FDA regulatory reviewers participated in usability testing by employing the tool as part of their ongoing real-life pharmacovigilance activities to provide subjective feedback on its practical impact, added value, and fitness for use. Results: All usability test participants cited the tool's ease of learning, ease of use, and generation of quantitative ADE safety signals, some of which corresponded to known established adverse drug reactions. Potential concerns included the comparability of the tool's automated literature search relative to a manual 'all fields' PubMed search, missing drug and adverse event terms, interpretation of signal scores, and integration with existing computer-based analytical tools. Conclusions: Usability testing demonstrated that this novel tool can automate the detection of ADE safety signals from published literature reports. Various mitigation strategies are described to foster improvements in design, productivity, and end-user satisfaction.
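The disproportionality signal scores referred to above can be illustrated with the proportional reporting ratio (PRR), a standard measure in this family; the description does not state which score the tool computes, so the following Python sketch, with hypothetical MeSH co-occurrence counts, is only an example of the general technique.

```python
import numpy as np

def prr(a, b, c, d):
    """Proportional reporting ratio for a drug-event 2x2 table:
         a: citations indexed with drug & event   b: drug, other events
         c: other drugs, event                    d: other drugs, other events
    Returns the PRR and a conventional 95% CI computed on the log scale."""
    ratio = (a / (a + b)) / (c / (c + d))
    se_log = np.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = np.exp(np.log(ratio) - 1.96 * se_log)
    hi = np.exp(np.log(ratio) + 1.96 * se_log)
    return ratio, lo, hi

# Hypothetical counts of MeSH-indexed drug/adverse-event co-occurrences
print(prr(20, 380, 100, 99500))
```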
Project description: Purpose: Patient-specific quality assurance (QA) is very important in radiotherapy, especially for patients with highly conformal treatment plans such as VMAT plans. Traditional QA protocols for these plans are time-consuming, considerably reducing the time available for patient treatments. In this work, a new MC-based secondary dose check software (SciMoCa) is evaluated and benchmarked against well-established TPSs (Monaco and Pinnacle3) by means of treatment plans and dose measurements. Methods: Fifty VMAT plans were computed using the same calculation parameters with SciMoCa and the two primary TPSs. Plans were validated with measurements performed with a 3D diode detector (ArcCHECK) by translating patient plans to phantom geometry. Calculation accuracy was assessed by measuring point dose differences and gamma passing rates (GPR) from a 3D gamma analysis with 3%/2 mm criteria. Comparison between SciMoCa and primary TPS calculations was made using the same estimators, on both patient and phantom geometry plans. Results: TPS and SciMoCa calculations were found to be in very good agreement with the validation measurements, with average point dose differences of 0.7 ± 1.7% and -0.2 ± 1.6% for SciMoCa and the two TPSs, respectively. Comparison between SciMoCa calculations and the two primary TPS plans did not show any statistically significant difference, with average point dose differences compatible with zero within error for both patient and phantom geometry plans, and GPR (98.0 ± 3.0% and 99.0 ± 3.0%, respectively) well in excess of the typical 95% clinical tolerance threshold. Conclusion: This work presents results obtained with a significantly larger sample than other similar analyses and, to the authors' knowledge, compares SciMoCa with an MC-based TPS for the first time. The results show that MC-based secondary patient-specific QA is a clinically viable, reliable, and promising technique that potentially allows significant time savings that can be used for patient treatment, and a per-plan QA that effectively complements traditional commissioning and calibration protocols.
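For readers unfamiliar with the gamma analysis mentioned above, the following toy Python sketch implements the standard gamma index in one dimension with global normalisation. The actual evaluation used full 3D dose grids from the ArcCHECK detector; the synthetic profiles and the 3%/2 mm defaults here are illustrative only.

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=2.0):
    """Toy 1D global gamma analysis (3%/2 mm by default).
    dd is the fractional dose-difference criterion, dta the
    distance-to-agreement in mm; positions are in mm.
    Returns the gamma value at each reference point."""
    dmax = ref_dose.max()                      # global normalisation dose
    gammas = np.empty_like(ref_dose)
    for i, (x, d) in enumerate(zip(ref_pos, ref_dose)):
        dose_term = (eval_dose - d) / (dd * dmax)
        dist_term = (eval_pos - x) / dta
        # brute-force minimisation over all evaluated points
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

pos = np.linspace(-50, 50, 201)                   # mm
ref = np.exp(-(pos / 30.0) ** 2)                  # synthetic reference profile
ev = 1.01 * np.exp(-((pos - 0.5) / 30.0) ** 2)    # slightly shifted/scaled copy
g = gamma_1d(pos, ref, pos, ev)
print(f"gamma passing rate: {100 * (g <= 1).mean():.1f}%")
```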
Project description: Heritage and space establish reciprocal relations that have been studied for decades. On the one hand, heritage has been described as an inherently spatial phenomenon. On the other hand, places are defined according to the attributes that make up their identity, among which heritage is a fundamental instrument. Starting from the idea that education, transmitted through inherited culture, plays an important role in the socialization process that integrates each subject within a specific community, and from the notion of scale as the one closest to heritage, we defined two general objectives: to determine the relationships between geographical scales, heritage perspective and the didactic potential attributed to heritage, within the framework of the construction of collective identities; and to contrast the perspectives of students and teachers regarding geographical scale, heritage and their didactic potential, deducing implications for educational practice. To address these objectives, we carried out non-experimental quantitative research with a relational-predictive aim. Specifically, we used a survey method, with the local scale (Fuente Álamo, Murcia, Spain) as the context and all Secondary Education students and teachers linked to the social sciences (n = 459) as participants. They answered the Test on Didactic Potentiality of Heritage according to Scale (TDPHS), and the resulting data were analysed through different procedures (Spearman's correlations, descriptive statistics, Mann-Whitney U, etc.) using the statistical program SPSS. The results show, on the one hand, that the scalar perspective scores are generally low, that the heritage perspective is consistent with the consideration of the scales, and that the perceived didactic potential of heritage is related to the importance given to each of the scales; and, on the other hand, that the contrast between the perspectives of students and teachers regarding geographical scale, heritage and their didactic potential is minimal.
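The nonparametric procedures mentioned above, run in SPSS in the study, can be reproduced in any statistics environment. A brief Python sketch follows, using entirely synthetic Likert-type scores rather than the study's data; the group sizes and score ranges are made up for illustration.

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(0)
# Synthetic Likert-type TDPHS scores (1-5) for two respondent groups
students = rng.integers(1, 6, size=40)
teachers = rng.integers(2, 6, size=19)
# Two item scores within one group, for a rank correlation
item_a = rng.integers(1, 6, size=40)
item_b = rng.integers(1, 6, size=40)

rho, p_rho = spearmanr(item_a, item_b)          # Spearman's correlation
u, p_u = mannwhitneyu(students, teachers)       # group comparison
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f}); "
      f"Mann-Whitney U = {u:.0f} (p = {p_u:.3f})")
```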
Project description: Investigating chromatin interactions between regulatory regions such as enhancer and promoter elements is pivotal for a deeper understanding of gene expression regulation. The emerging 3D mapping technologies that focus on enriched signals, such as Hi-TrAC/Trac-looping and HiChIP, reduce sequencing cost compared to Hi-C and provide higher interaction resolution for cis-regulatory elements as well as more comprehensive information such as chromatin accessibility. A robust pipeline is needed for the comprehensive interpretation of these data, especially for loop-centric analysis. We have therefore developed a new versatile tool named cLoops2 for full-stack analysis of 3D chromatin interaction data. cLoops2 consists of core modules for peak-calling, loop-calling, calling of differentially enriched loops, and loop annotation. It also contains multiple modules for interaction resolution estimation, data similarity estimation, feature quantification, aggregation analysis, and visualization. cLoops2, together with documentation and example data, is open source and freely available at GitHub: https://github.com/YaqiangCao/cLoops2.
Project description: Objective: Determine if a point-based attendance system combined with longitudinal gamification is feasible and improves didactic session attendance and learner perceptions at our internal medicine residency. Methods: A prospective before-after cohort study. Weekly attendance was tracked from June 2022 through April 2023 at our university-affiliated internal medicine residency program. We implemented a point-based longitudinal game incentivizing residents to attend didactics with positive reinforcement in July 2022 (C: carrot). We added tiered positive reinforcement and positive punishment to the game in January 2023 (CS: carrot and stick). Attendance during these periods was compared to the preintervention (P) and postintervention (S) periods. Perceptions were assessed during the P, C, and CS periods with Likert scale ratings. Results: CS was associated with higher attendance than the other study periods (P = .002). Median attendance was P: 51% (IQR 37.5-64.5), C: 65% (IQR 50-74), CS: 81% (IQR 78-94), and S: 66% (IQR 63-71). Perceptions were similar across the preintervention and intervention study periods, including perceptions of camaraderie (P 4.4, C 4.4, CS 4.5; P = .56), interest in attending didactic sessions (P 3.7, C 3.4, CS 3.2; P = .21), and mandate as the primary reason for attending didactics (P 3.1, C 3.1, CS 3.2; P = .96). Conclusions: A point-based attendance system combined with a longitudinal game that included tiered positive reinforcement and positive punishment was feasible and associated with higher didactic attendance, but not with changes in resident perceptions.
Project description: Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents and to capture emergent phenomena arising from these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space, to simulate low-probability but high-consequence events that may have significant policy implications; and explanation of model behavior, to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as on extreme results. In the last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. In the first, the input space is reduced to only those inputs that produced the variance of the initial ABM, resulting in a model with an output distribution similar to that of the initial model. In the second, we refine the value of the most influential input, producing a model that maintains the mean output of the initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
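The output variance decomposition with quasi-random sampling described above can be sketched with a first-order Sobol index estimator. In the Python sketch below, the toy three-input function stands in for the farmland-conservation ABM, and the pick-freeze estimator is one common choice rather than necessarily the one used in the paper.

```python
import numpy as np
from scipy.stats import qmc

def toy_abm(x):
    """Stand-in for an ABM run: maps 3 inputs in [0, 1] to one output.
    (A real application would run the simulation here.)"""
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

n, ndim = 2 ** 12, 3
# quasi-random (Sobol) sampling of the input space: two independent blocks
sob = qmc.Sobol(d=2 * ndim, scramble=True, seed=1)
ab = sob.random(n)
A, B = ab[:, :ndim], ab[:, ndim:]
yA = toy_abm(A)
var_y = yA.var()

# First-order Sobol indices via the pick-freeze estimator:
# C_i takes column i from A and the remaining columns from B,
# so S_i ~ Cov(y_A, y_Ci) / Var(y).
for i in range(ndim):
    C = B.copy()
    C[:, i] = A[:, i]
    yC = toy_abm(C)
    S_i = np.cov(yA, yC)[0, 1] / var_y
    print(f"S_{i + 1} ~ {S_i:.2f}")
```

Inputs with a near-zero index contribute little output variance and are candidates for fixing at nominal values, which is the model-simplification step the framework describes.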