Project description: Decision analysis is a tool that clinicians can use to choose the option that maximizes the overall net benefit to a patient. It is an explicit, quantitative, and systematic approach to decision making under conditions of uncertainty. In this article, we present two teaching tips aimed at helping clinical learners understand the use and relevance of decision analysis. The first tip demonstrates the structure of a decision tree. With this tree, a clinician can identify the optimal choice among complicated options by combining the probabilities of events with patient valuations of the possible outcomes. The second tip demonstrates how to address uncertainty in the estimates used in a decision tree. We field tested the tips twice, with interns and senior residents. Teacher preparatory time was approximately 90 minutes. The field test utilized a board and a calculator, and two handouts were prepared. Learners recognized the importance of incorporating values into the decision-making process, as well as the role of uncertainty. The educational objectives appeared to be reached. These teaching tips introduce clinical learners to decision analysis in a fashion that illustrates principles of clinical reasoning and shows how patient values can be actively incorporated into complex decision making.
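The fold-back calculation at the heart of the first tip is small enough to show in a few lines of code. Below is a minimal sketch in Python with hypothetical options, probabilities, and utilities (none taken from the article): the expected utility of each option is the probability-weighted sum of the patient's valuations of its outcomes.

```python
# Minimal sketch of "folding back" a decision tree. The options, outcome
# probabilities, and utilities below are hypothetical, not from the article:
# each option leads to chance outcomes, each with a probability and a
# patient-assigned utility (0 = worst outcome, 1 = best outcome).

options = {
    "surgery": [
        (0.90, 1.00),   # full recovery
        (0.08, 0.50),   # recovery with complications
        (0.02, 0.00),   # perioperative death
    ],
    "medical therapy": [
        (0.70, 0.95),   # symptoms controlled
        (0.30, 0.40),   # symptoms persist
    ],
}

def expected_utility(branches):
    """Expected utility of one option: sum over outcomes of probability * utility."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in branches)

for name, branches in options.items():
    print(f"{name}: expected utility = {expected_utility(branches):.3f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print(f"optimal choice: {best}")
```

With these hypothetical numbers, surgery (0.940) edges out medical therapy (0.785); the second tip's sensitivity analysis amounts to varying the probabilities and utilities and checking whether the optimal choice flips.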
Project description: Despite increased recognition of the importance of evidence-based assessment in clinical psychology, utilization of gold-standard practices remains low, including during diagnostic assessments. One avenue for streamlining evidence-based diagnostic assessment is to increase the use of diagnostic likelihood ratios (DLRs), derived from receiver operating characteristic (ROC) curve analyses. DLRs allow the likelihood that an individual has a disorder to be adjusted based on self-report data (e.g., questionnaires, psychosocial and family history). Although DLRs provide strong and readily implementable psychometric data to guide diagnostic decision-making, the analyses necessary to derive DLRs are not commonplace in psychology curricula, and available resources require familiarity with specialized statistical methodologies and software. We developed a free, researcher-oriented dashboard, shinyDLRs (https://dlrs.shinyapps.io/shinyDLRs/), to facilitate the derivation of DLRs. shinyDLRs allows researchers to carry out multiple analyses while providing descriptive interpretations of statistics derived from ROC curves. We present the utility of this interface as applied to several freely available measures of mood and anxiety for the purpose of guiding diagnosis of psychopathology. The sample leveraged to accomplish this goal included 576 youth, 4-19 years of age, and a parent informant, both of whom completed several questionnaires and semi-structured interviews prior to participating in treatment at a university-based research clinic. Lastly, we provide recommendations for the inclusion of DLRs in future research investigating the psychometric properties and diagnostic utility of assessments.
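As a hedged illustration of the kind of statistic shinyDLRs reports (the counts and cut-off below are hypothetical, not from the article's sample), DLRs can be computed directly from a 2x2 table of a dichotomized questionnaire score against a reference diagnosis, and then used to update a pretest probability:

```python
# Hedged sketch: deriving DLRs from a hypothetical 2x2 table of a
# dichotomized screening score against a reference diagnosis.

tp, fn = 80, 20    # diagnosed youths scoring above / below the cut-off
fp, tn = 30, 120   # non-diagnosed youths scoring above / below the cut-off

sensitivity = tp / (tp + fn)          # 0.80
specificity = tn / (fp + tn)          # 0.80

dlr_pos = sensitivity / (1 - specificity)   # DLR+: how much a positive result raises the odds
dlr_neg = (1 - sensitivity) / specificity   # DLR-: how much a negative result lowers the odds

def posttest_probability(pretest_p, dlr):
    """Update a pretest probability with a DLR via Bayes' theorem in odds form."""
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * dlr
    return posttest_odds / (1 + posttest_odds)

print(f"DLR+ = {dlr_pos:.2f}, DLR- = {dlr_neg:.2f}")
print(f"pretest 20% -> posttest {posttest_probability(0.20, dlr_pos):.0%} after a positive screen")
```

Here a positive screen with DLR+ = 4 moves a 20% pretest probability to 50%, which is the adjustment the dashboard is designed to make routine.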
Project description: Objectives: Traditionally, evaluation is considered a measurement process that can be performed independently of the cultural context. However, more recently the importance of considering raters' sense-making, that is, the process by which raters assign meaning to their collective experiences, has been recognised. Thus far, the majority of the discussion on this topic has originated from Western perspectives. Little is known about the potential influence of an Asian culture on raters' sense-making. This study explored residents' sense-making associated with evaluating their clinical teachers within an Asian setting to better understand the contextual dependency of validity. Design: A qualitative study using constructivist grounded theory. Setting: The Japanese Ministry of Health, Labour and Welfare has implemented a system to monitor the quality of clinical teaching within its 2-year postgraduate training programme. An evaluation instrument was developed specifically for the Japanese setting through which residents can evaluate their clinical teachers. Participants: 30 residents from 10 Japanese teaching hospitals with experience in evaluating their clinical teachers were sampled purposively and theoretically. Methods: We conducted in-depth, semistructured individual interviews. Sensitising concepts derived from Confucianism and principles of response process informed open, axial and selective coding. Results: Two themes and four subthemes were constructed. Japanese residents emphasised awareness of their relationship with their clinical teachers (1). This awareness was fuelled by their sense of hierarchy (1a) and of being part of a collective society (1b). Residents described how the meaning of evaluation (2) was coloured by their perceived role as seniors (2a) and their experienced responsibility for future generations (2b). Conclusions: Japanese residents' sense-making while evaluating their clinical teachers appears to be situated in and affected by Japanese cultural values. These findings contribute to a better understanding of a culture's influence on residents' sense-making of evaluation instruments and to the validity argument of evaluation.
Project description: BACKGROUND: Clinical prediction rules (CPRs) are tools that clinicians can use to predict the most likely diagnosis, prognosis, or response to treatment for a patient based on individual characteristics. CPRs attempt to standardize, simplify, and increase the accuracy of clinicians' diagnostic and prognostic assessments. The teaching tips series is designed to give teachers advice and materials they can use to attain specific educational objectives. EDUCATIONAL OBJECTIVES: In this article, we present 3 teaching tips aimed at helping clinical learners use clinical prediction rules and assess pretest probability more accurately in everyday practice. The first tip is designed to demonstrate variability in physicians' estimation of pretest probability. The second tip demonstrates how the estimate of pretest probability influences the interpretation of diagnostic tests and patient management. The third tip exposes learners to examples of different types of CPRs and how to apply them in practice. PILOT TESTING: We field tested all 3 tips with 16 learners, a mix of interns and senior residents. Teacher preparatory time was approximately 2 hours. The field test utilized a board and a data projector; 3 handouts were prepared. The tips were felt to be clear, and the educational objectives were reached. Potential teaching pitfalls were identified. CONCLUSION: Teaching with these tips will help physicians appreciate the importance of applying evidence to their everyday decisions. In 2 or 3 short teaching sessions, clinicians can also become familiar with the use of CPRs in applying evidence consistently in everyday practice.
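A hedged sketch of the calculation behind the second tip (the likelihood ratio and pretest values below are hypothetical, not from the article): the same positive test result moves clinicians with different pretest estimates to very different posttest probabilities, which is why variability in pretest estimation matters.

```python
# Hedged sketch of the second tip's point: a single positive result from a
# hypothetical test with LR+ = 5 yields very different posttest
# probabilities depending on the clinician's pretest estimate.

LR_POSITIVE = 5.0  # hypothetical positive likelihood ratio

def posttest_probability(pretest_p, lr):
    odds = pretest_p / (1 - pretest_p)      # probability -> odds
    odds *= lr                              # Bayes' theorem in odds form
    return odds / (1 + odds)                # odds -> probability

for pretest in (0.05, 0.20, 0.50, 0.80):
    post = posttest_probability(pretest, LR_POSITIVE)
    print(f"pretest {pretest:.0%} -> posttest {post:.0%}")
# pretest 5%  -> posttest 21%: the disease may remain unlikely despite a positive test
# pretest 80% -> posttest 95%: the same result is near-confirmatory
```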
Project description: When comparing binary test results from two diagnostic systems, superiority in both sensitivity and specificity also implies differences in all conventional summary indices and, locally, in the underlying receiver operating characteristic (ROC) curves. However, when one of the two binary tests has higher sensitivity and lower specificity (or vice versa), comparisons of their performance levels are nontrivial, and the use of different summary indices may lead to contradictory conclusions. A frequently used approach that is free of the subjectivity associated with summary indices is based on the comparison of the underlying ROC curves, which requires the collection of rating data using multicategory scales, whether natural or experimentally imposed. However, data for reliable estimation of ROC curves are frequently unavailable. The purpose of this article is to develop an approach that uses diagnostic likelihood ratios, namely, likelihood ratios of positive or negative responses, to make simple inferences regarding the underlying ROC curves and associated areas in the absence of reliable rating data, or regarding the relative binary characteristics, when these are of primary interest. For inferences related to the underlying curves, the authors exploit the assumed concavity of the true underlying ROC curve to describe conditions under which these curves must differ and under which the curves have different areas. For scenarios in which the binary characteristics are of primary interest, the authors use characteristics of chance performance to demonstrate that the derived conditions provide strong evidence of the superiority of one binary test over another. By relating these derived conditions to hypotheses about the true likelihood ratios of the two binary diagnostic tests being compared, the authors enable a straightforward statistical procedure for the corresponding inferences. The authors derived simple algebraic and graphical methods for describing the conditions for superiority of one of two diagnostic tests with respect to their binary characteristics, the underlying ROC curves, or the areas under the curves. The graphical regions are useful for identifying potential differences between two systems, which then have to be tested statistically. The simple statistical tests can be performed with well-known methods for the comparison of diagnostic likelihood ratios. The developed approach offers a solution for some of the more difficult-to-analyze scenarios, in which diagnostic tests do not demonstrate concordant differences in both sensitivity and specificity. In addition, the resulting inferences do not contradict the conclusions that can be obtained using conventional and reasonably defined summary indices. When binary diagnostic tests are of primary interest, the proposed approach offers an objective and powerful method for comparing two binary diagnostic tests. The significant advantage of this method is that it enables objective analyses when one test has higher sensitivity but lower specificity, while ensuring agreement with study conclusions based on other reasonable and widely accepted summary indices. For truly multicategory diagnostic tests, the proposed method can help in concluding inferiority of one of the diagnostic tests based on binary data, thereby potentially saving the need to conduct a more expensive multicategory ROC study.
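As a hedged illustration only (the article's formal superiority conditions rest on ROC concavity and are not reproduced here), the likelihood ratios on which the comparison is based are simple functions of sensitivity and specificity. The hypothetical pair below shows the discordant case the article targets, where neither test dominates in both characteristics:

```python
# Hedged illustration, not the authors' derivation: positive and negative
# diagnostic likelihood ratios of two binary tests, from hypothetical
# (sensitivity, specificity) pairs. Test A trades specificity for
# sensitivity relative to test B, so neither dominates outright.

tests = {
    "A": {"sens": 0.90, "spec": 0.70},
    "B": {"sens": 0.75, "spec": 0.85},
}

for name, t in tests.items():
    lr_pos = t["sens"] / (1 - t["spec"])   # LR+ = true positive fraction / false positive fraction
    lr_neg = (1 - t["sens"]) / t["spec"]   # LR- = false negative fraction / true negative fraction
    print(f"test {name}: LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")
```

Under a concave ROC curve, LR+ and LR- at an operating point bound the slopes of the curve on either side of that point; comparing the two tests' likelihood ratios, rather than their sensitivities and specificities directly, is what enables the article's inferences about the underlying curves.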
Project description: Prioritizing individual rare variants within associated genes or regions often relies on an ad hoc combination of statistical and biological considerations. From the statistical perspective, rare variants are often ranked using Fisher's exact p values, which can lead to different rankings of the same set of variants depending on whether 1-sided or 2-sided p values are used. We propose a likelihood ratio-based measure, maxLRc, for the statistical component of ranking rare variants under a case-control study design that avoids the hypothesis-testing paradigm. We prove analytically that the maxLRc is always well-defined, even when the data have zero cell counts in the 2×2 disease-variant table. Via simulation, we show that the maxLRc outperforms Fisher's exact p values in most practical scenarios considered. Using next-generation sequence data from 27 rolandic epilepsy cases and 200 controls in a region previously shown to be linked to and associated with rolandic epilepsy, we demonstrate that the rankings assigned by the maxLRc and by exact p values can differ substantially. The maxLRc provides reliable statistical prioritization of rare variants using only the observed data, avoiding the need to specify parameters associated with hypothesis testing that can result in ranking discrepancies across p value procedures, and it is applicable to common variant prioritization as well.
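For orientation only, the sketch below computes a generic maximized likelihood ratio for a 2×2 disease-variant table, comparing unconstrained case and control carrier frequencies against a pooled no-association estimate. It illustrates likelihood-ratio-based ranking and why zero cells pose no problem, but it is not necessarily the authors' exact maxLRc definition, which should be taken from the paper; the variant counts are hypothetical.

```python
# Hedged sketch of a generic maximized likelihood ratio for a 2x2
# case-control variant table; NOT necessarily the authors' exact maxLRc.
import math

def log_lik(k, n, p):
    """Binomial log-likelihood, with the 0 * log(0) = 0 convention."""
    ll = 0.0
    if k > 0:
        ll += k * math.log(p)
    if n - k > 0:
        ll += (n - k) * math.log(1 - p)
    return ll

def max_log_lr(carriers_cases, n_cases, carriers_controls, n_controls):
    """log LR: unconstrained carrier-frequency MLEs vs. the pooled (no-association) MLE."""
    p1 = carriers_cases / n_cases
    p0 = carriers_controls / n_controls
    p_pool = (carriers_cases + carriers_controls) / (n_cases + n_controls)
    alt = log_lik(carriers_cases, n_cases, p1) + log_lik(carriers_controls, n_controls, p0)
    null = log_lik(carriers_cases, n_cases, p_pool) + log_lik(carriers_controls, n_controls, p_pool)
    return alt - null

# Rank two hypothetical variants; the zero control count in var1 is handled
# without ad hoc corrections because the MLEs remain well-defined.
variants = {"var1": (3, 27, 0, 200), "var2": (5, 27, 8, 200)}
ranked = sorted(variants, key=lambda v: max_log_lr(*variants[v]), reverse=True)
print(ranked)
```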
Project description: Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which shows up as non-random patterns in the distribution of data points around the median. Non-random variation may be identified by simple statistical tests, including the presence of unusually long runs of data points on one side of the median or the graph crossing the median unusually few times. However, there is no general agreement on what defines "unusually long" or "unusually few". Other tests of questionable value are frequently used as well. Three sets of run chart rules (the Anhoej, Perla, and Carey rules) have been published in peer-reviewed healthcare journals, but these sets differ significantly in their sensitivity and specificity to non-random variation. In this study, I investigate the diagnostic value, expressed as likelihood ratios, of three sets of run chart rules for detecting shifts in process performance, using random data series. The study concludes that the Anhoej rules have good diagnostic properties and are superior to the Perla and Carey rules.
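A hedged sketch of the two core tests follows. The cut-offs are commonly cited approximations of the Anhoej rules (longest run longer than round(log2(n)) + 3, or fewer median crossings than the 5th percentile of a binomial(n - 1, 0.5) distribution, over n "useful" observations); the exact limits should be checked against the source.

```python
# Hedged sketch of the two core run chart tests: the longest run of data
# points on one side of the median, and the number of median crossings.
# Cut-offs are approximations of the Anhoej rules; see the source for exact limits.
import math
import statistics

def binom_lower_quantile(n, q=0.05):
    """Smallest k with P(X <= k) >= q for X ~ binomial(n, 0.5)."""
    cdf, total = 0.0, 2 ** n
    for k in range(n + 1):
        cdf += math.comb(n, k) / total
        if cdf >= q:
            return k

def shows_nonrandom_variation(values):
    median = statistics.median(values)
    sides = [v > median for v in values if v != median]  # "useful" points only
    n = len(sides)
    longest = current = 1
    crossings = 0
    for prev, cur in zip(sides, sides[1:]):
        current = current + 1 if cur == prev else 1
        longest = max(longest, current)
        crossings += cur != prev
    return longest > round(math.log2(n)) + 3 or crossings < binom_lower_quantile(n - 1)

# A series with an upward shift halfway through triggers a signal.
data = [5, 6, 4, 7, 8, 9, 9, 10, 11, 10, 12, 11, 13, 12, 14, 13]
print(shows_nonrandom_variation(data))  # True
```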
Project description: Receiver operating characteristic (ROC) analysis is widely used to describe the discriminatory power of a diagnostic test to differentiate between populations having or not having a specific disease, using a dichotomous threshold. In this way, positive and negative likelihood ratios (LR+ and LR-) can be calculated for use in Bayesian estimation of disease probabilities. Similarly, LRs can be calculated for certain ranges of test results. However, since many diagnostic tests are quantitative in nature, it would be desirable to estimate LRs for each quantitative result. These LRs are equal to the slope of the tangent to the ROC curve at the corresponding point. Since the exact distribution of test results in diseased and non-diseased people is often not known, the calculation of such LRs for quantitative test results is not straightforward. Here, a simple distribution-independent method for reaching this goal is described, using Bézier curves, which are defined by tangents to a curve. Such a method would help standardize quantitative test results, which are not always comparable between different test providers, by reporting them as LRs for a specific diagnosis in addition to, or instead of, quantities such as mg/L or nmol/L, or even indices or units.
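As a toy illustration of the central idea (all coordinates are hypothetical, and the article's actual construction of Bézier curves from tangents is more elaborate), a quadratic Bézier segment whose control point lies at the intersection of the tangents at two operating points forms a smooth ROC arc, and the LR for any result on that arc is the slope of the tangent at the corresponding point:

```python
# Hedged sketch: the LR for a quantitative result equals the slope of the
# ROC tangent at the corresponding operating point. A single quadratic
# Bezier segment stands in for a smooth ROC arc: P0 and P2 are operating
# points (FPF, TPF) and control point P1 is where their tangent lines meet,
# so the segment matches both tangents. All coordinates are hypothetical.

P0 = (0.05, 0.40)   # stricter threshold: low FPF, moderate TPF
P1 = (0.12, 0.78)   # assumed intersection of the tangents at P0 and P2
P2 = (0.40, 0.90)   # laxer threshold: higher FPF and TPF

def bezier_point(t):
    """Point on the quadratic Bezier B(t) for t in [0, 1]."""
    x = (1 - t)**2 * P0[0] + 2*t*(1 - t) * P1[0] + t**2 * P2[0]
    y = (1 - t)**2 * P0[1] + 2*t*(1 - t) * P1[1] + t**2 * P2[1]
    return x, y

def likelihood_ratio(t):
    """LR at B(t): slope dTPF/dFPF of the tangent, from the derivative B'(t)."""
    dx = 2*(1 - t) * (P1[0] - P0[0]) + 2*t * (P2[0] - P1[0])
    dy = 2*(1 - t) * (P1[1] - P0[1]) + 2*t * (P2[1] - P1[1])
    return dy / dx

for t in (0.0, 0.5, 1.0):
    (fpf, tpf), lr = bezier_point(t), likelihood_ratio(t)
    print(f"FPF = {fpf:.2f}, TPF = {tpf:.2f}, LR = {lr:.1f}")
```

The printed LRs fall monotonically along the arc (from about 5.4 to 0.4 with these numbers), as required of a concave ROC curve, so each quantitative result maps to a distinct, reportable likelihood ratio.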