Project description: The ability to make inferences based on statistical information has so far been tested only in animals with large brains relative to their body size, such as primates and parrots. Here we tested whether giraffes (Giraffa camelopardalis), despite having a smaller relative brain size, can rely on relative frequencies to predict sampling outcomes. We presented them with two transparent containers filled with different quantities of highly liked and less preferred food. The experimenter covertly drew one piece of food from each container and let the giraffe choose between the two options. In the first task, we varied the quantity and relative frequency of highly liked and less preferred food pieces. In the second task, we inserted a physical barrier in both containers, so giraffes only had to take into account the upper part of the container when predicting the outcome. In both tasks, giraffes successfully selected the container more likely to provide the highly liked food, integrating physical information to correctly predict sampling outcomes. By ruling out alternative explanations based on simpler quantity heuristics and learning processes, we showed that giraffes can make decisions based on statistical inferences.
Project description: Objective: We completed a systematic review of the information reported as included in decision aids (DAs) for adult patients, to determine whether it is complete, balanced and accurate. Search strategy: DAs were identified using the Cochrane Database of DAs and searches of four electronic databases using the terms 'decision aid'; 'shared decision making' and 'patients'; 'multimedia or leaflets or pamphlets or videos and patients and decision making'. Additionally, publications reporting DA development and actual DAs reported as publicly available on the Internet were consulted. Publications were included up to May 2006. Data extraction: Data were extracted on the following variables: external groups consulted in development of the DA, type of study used, categories of information, inclusion of probabilities, use of citation lists and inclusion of patient experiences. Main results: 68 treatment DAs and 30 screening DAs were identified. 17% of treatment DAs and 47% of screening DAs did not report any external consultation and, of those that did, DA producers tended to rely more heavily on medical experts than on patients' guidance. Content evaluations showed that (i) treatment DAs frequently omit describing the procedure(s) involved in treatment options and (ii) screening DAs frequently focus on false positives but not false negatives. About half of the treatment DAs reported probabilities, with a greater emphasis on potential benefits than harms. Similarly, screening DAs were more likely to provide false-positive than false-negative rates. Conclusions: The review led us to be concerned about the completeness, balance and accuracy of information included in DAs.
Project description: Observed associations between events can be validated by statistical information about their reliability or by the testimony of communicative sources. We tested whether toddlers learn from their own observation of efficiency, assessed through statistical information on the reliability of interventions, or from communicatively presented demonstration, when these two potential types of evidence for the validity of interventions on a novel artifact are pitted against each other. Eighteen-month-old infants observed two adults, one operating the artifact by a method that was more efficient (2/3 probability of success) than that of the other (1/3 probability of success). Compared to the Baseline condition, in which communicative signals were not employed, infants tended to choose the less reliable method to operate the artifact when this method was demonstrated in a communicative manner in the Experimental condition. This finding demonstrates that, in certain circumstances, communicative sanctioning of reliability may override statistical evidence for young learners. Such a bias can serve the fast and efficient transmission of knowledge between generations.
Project description: Objective: To conduct a systematic review of randomised trials of patient decision aids in improving decision making and outcomes. Design: We included randomised trials of interventions providing structured, detailed, and specific information on treatment or screening options and outcomes to aid decision making. Two reviewers independently screened and extracted data on several evaluation criteria. Results were pooled by using weighted mean differences and relative risks. Results: 17 studies met the inclusion criteria. Compared with the controls, decision aids produced higher knowledge scores (weighted mean difference = 19/100, 95% confidence interval 14 to 25); lower decisional conflict scores (weighted mean difference = -0.3/5, -0.4 to -0.1); more active patient participation in decision making (relative risk = 2.27, 95% confidence interval 1.3 to 4); and no differences in anxiety, satisfaction with decisions (weighted mean difference = 0.6/100, -3 to 4), or satisfaction with the decision-making process (2/100, -3 to 7). Decision aids had a variable effect on decisions. When complex decision aids were compared with simpler versions, they were better at reducing decisional conflict and improved knowledge marginally, but did not affect satisfaction. Conclusions: Decision aids improve knowledge, reduce decisional conflict, and stimulate patients to be more active in decision making without increasing their anxiety. Decision aids have little effect on satisfaction and a variable effect on decisions. The effects on outcomes of decisions (persistence with choice, quality of life) remain uncertain.
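As a rough illustration of the pooling approach mentioned in this review, the sketch below shows fixed-effect, inverse-variance pooling of weighted mean differences; the trial labels, effect estimates, and standard errors are hypothetical and are not data from the review.

```python
import math

# Hypothetical per-trial mean differences (decision aid minus control, on a
# 0-100 knowledge scale) and their standard errors -- illustrative values only.
trials = [
    ("Trial A", 22.0, 4.0),
    ("Trial B", 15.0, 3.0),
    ("Trial C", 19.0, 5.0),
]

# Fixed-effect inverse-variance pooling: each trial is weighted by 1 / SE^2.
weights = [1.0 / se ** 2 for _, _, se in trials]
pooled_wmd = sum(w * d for (_, d, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled weighted mean difference.
ci_low = pooled_wmd - 1.96 * pooled_se
ci_high = pooled_wmd + 1.96 * pooled_se

print(f"Pooled WMD = {pooled_wmd:.1f} (95% CI {ci_low:.1f} to {ci_high:.1f})")
```

A full meta-analysis package would also report heterogeneity statistics and possibly a random-effects estimate; this sketch only shows the core weighting arithmetic.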
Project description: Background: Cancer patients often do not make informed decisions regarding clinical trial participation. This study evaluated whether a web-based decision aid (DA) could support trial decisions compared with our cancer center's website. Methods: Adults diagnosed with cancer in the past 6 months who had not previously participated in a cancer clinical trial were eligible. Participants were randomized to view the DA or our cancer center's website (enhanced usual care [UC]). Controlling for educational attainment and whether participants had heard of cancer clinical trials, multivariable linear regression examined the effect of group on knowledge, self-efficacy for finding trial information, decisional conflict (values clarity and uncertainty), intent to participate, decision readiness, and trial perceptions. Results: Two hundred patients (86%) consented between May 2014 and April 2015. One hundred were randomized to each group. Surveys were completed by 87 in the DA group and 90 in the UC group. DA group participants reported clearer values regarding trial participation (least squares [LS] mean = 15.8 vs. 32, p < .0001) and less uncertainty (LS mean = 24.3 vs. 36.4, p = .025) than UC group participants. The DA group also had higher objective knowledge than the UC group (LS mean = 69.8 vs. 55.8, p < .0001). There were no differences between groups in intent to participate. Conclusions: Improvements in key decision outcomes, including knowledge, self-efficacy, certainty about choice, and values clarity, among participants who viewed the DA suggest that web-based DAs can support informed decisions about trial participation among cancer patients facing this preference-sensitive choice. Although better informing patients before trial participation could improve retention, more work is needed to examine DA impact on enrollment and retention. Implications for practice: This paper describes evidence regarding a decision tool to support patients' decisions about trial participation. By improving knowledge, helping patients clarify preferences for participation, and facilitating conversations about trials, decision aids could lead to decisions about participation that better match patients' preferences, promoting patient-centered care and the ethical conduct of clinical research.
Project description: Background: Symptom checker apps are patient-facing decision support systems aimed at providing advice to laypersons on whether, where, and how to seek health care (disposition advice). Such advice can improve laypersons' self-assessment and ultimately improve medical outcomes. Past research has mainly focused on the accuracy of symptom checker apps' suggestions. To support decision-making, such apps need to provide not only accurate but also trustworthy advice. To date, only a few studies have addressed the extent to which laypersons trust symptom checker app advice or the factors that moderate their trust. Studies on general decision support systems have shown that framing automated systems anthropomorphically or as emphasizing expertise, for example by using icons symbolizing artificial intelligence (AI), affects users' trust. Objective: This study aims to identify the factors influencing laypersons' trust in the advice provided by symptom checker apps. Primarily, we investigated whether designs using anthropomorphic framing or framing the app as an AI increase users' trust compared with no such framing. Methods: Through a web-based survey, we recruited 494 US residents with no professional medical training. The participants first appraised the urgency of a fictitious patient description (case vignette). Subsequently, a decision aid (mock symptom checker app) provided disposition advice contradicting the participants' appraisal, and they then reappraised the vignette. Participants were randomized into 3 groups: 2 experimental groups using visual framing (anthropomorphic, 160/494, 32.4%, vs AI, 161/494, 32.6%) and a neutral group without such framing (173/494, 35%). Results: Most participants (384/494, 77.7%) followed the decision aid's advice, regardless of its urgency level. Neither anthropomorphic framing (odds ratio 1.120, 95% CI 0.664-1.897) nor framing as AI (odds ratio 0.942, 95% CI 0.565-1.570) increased behavioral or subjective trust (P=.99) compared with the no-frame condition. Even participants who were extremely certain of their own decisions (ie, 100% certain) commonly changed them in favor of the symptom checker's advice (19/34, 56%). Propensity to trust and eHealth literacy were associated with increased subjective trust in the symptom checker (propensity to trust b=0.25; eHealth literacy b=0.2), whereas sociodemographic variables showed no such link with either subjective or behavioral trust. Conclusions: Contrary to our expectation, neither the anthropomorphic framing nor the emphasis on AI increased trust in symptom checker advice compared with a neutral control condition. However, independent of the interface, most participants trusted the mock app's advice, even when they were very certain of their own assessment. This raises the question of whether laypersons use such symptom checkers as substitutes for, rather than aids to, their own decision-making. With trust in symptom checkers already high at baseline, the benefit of symptom checkers depends on interface designs that enable users to adequately calibrate their trust levels during usage. Trial registration: Deutsches Register Klinischer Studien DRKS00028561; https://tinyurl.com/rv4utcfb (retrospectively registered).
Project description: Background. Overdiagnosis is an accepted harm of cancer screening, but studies of prostate cancer screening decision aids have not examined the provision of information important in communicating the risk of overdiagnosis, including overdiagnosis frequency, competing mortality risk, and the high prevalence of indolent cancers in the population. Methods. We undertook a comprehensive review of all publicly available decision aids for prostate cancer screening, published in (or translated to) the English language, without date restrictions. We included all decision aids from a recent systematic review and screened excluded studies to identify further relevant decision aids. We used a Google search to identify further decision aids not published in the peer-reviewed medical literature. Two reviewers independently screened the decision aids and extracted information on communication of overdiagnosis. Disagreements were resolved through discussion or by consulting a third author. Results. Forty-one decision aids were included out of the 80 records identified through the search. Most decision aids (n = 32, 79%) did not use the term overdiagnosis but included a description of it (n = 38, 92%). Few (n = 7, 17%) reported the frequency of overdiagnosis. Little more than half presented the benefits of prostate cancer screening before the harms (n = 22, 54%), and only 16 (39%) presented information on competing risks of mortality. Only 2 (5%) reported the prevalence of undiagnosed prostate cancer in the general population. Conclusion. Most patient decision aids for prostate cancer screening lacked important information on overdiagnosis. Specific guidance is needed on how to communicate the risks of overdiagnosis in decision aids, including appropriate content, terminology and graphical display. Highlights: Most patient decision aids for prostate cancer screening lack important information on overdiagnosis. Specific guidance is needed on how to communicate the risks of overdiagnosis.
Project description: Background: Research on shared decision-making (SDM) has mainly focused on decisions about treatment (e.g., medication or surgical procedures). Little is known about the decision-making process for the numerous other decisions in consultations. Objectives: We assessed to what extent patients are actively involved in different decision types in medical specialist consultations and to what extent this was affected by medical specialist, patient, and consultation characteristics. Design: Analysis of video-recorded encounters between medical specialists and patients at a large teaching hospital in the Netherlands. Participants: Forty-one medical specialists (28 male) from 18 specialties and 781 patients. Main measures: Two independent raters classified decisions in the consultations by decision type (main or other) and decision category (diagnostic tests, treatment, follow-up, or other advice) and assessed the decision-making behavior for each decision using the Observing Patient Involvement (OPTION5) instrument, ranging from 0 (no SDM) to 100 (optimal SDM). Scheduled and realized consultation duration were recorded. Key results: In the 727 consultations, the mean (SD) OPTION5 score for the main decision was higher (16.8 (17.1)) than that for the other decisions (5.4 (9.0), p < 0.001). The main-decision OPTION5 scores for treatment decisions (n = 535, 19.2 (17.3)) were higher than those for decisions about diagnostic tests (n = 108, 14.6 (16.8)) or follow-up (n = 84, 3.8 (8.1), p < 0.001). This difference remained significant in multilevel analyses. Longer consultation duration was the only other factor significantly associated with higher OPTION5 scores (p < 0.001). Conclusion: Most of the limited patient involvement was observed in main decisions (versus other decisions) and in treatment decisions (versus diagnostic, follow-up, and advice decisions). SDM was associated with longer consultations. SDM training should help clinicians tailor how they promote patient involvement in different types of decisions, and physicians and policy makers should allow sufficient consultation time to support the application of SDM in clinical practice.
Project description: Experts believe that increasing the low uptake of screening for colorectal cancer (CRC) requires educating patients about all approved tests and helping them choose one that fits their preferences. As one motto puts it: "The best test is the one that gets done." Screening tests range from more invasive and very sensitive for polyps and cancer (colonoscopy) to less invasive and less sensitive (e.g., fecal immunochemical testing [FIT]). But it is unclear how best to educate patients about the options and the tradeoffs involved. Some guidelines recommend that decision aids, a promising tool in this area, provide patients with detailed quantitative information, including baseline risk, risk reduction, and the chance of negative outcomes. This sort of "comparative effectiveness" data, however, can confuse patients, especially those with limited mathematical ability. Previous studies have not measured the effect of providing quantitative information to patients with varying levels of ability or interest, nor have they asked patients whether such data are essential for their decision-making.
The investigators will conduct a clinical trial to determine the impact on patients of viewing a decision aid (DA) that includes quantitative information versus a DA without such data. The investigators will also seek to determine whether numeracy moderates the effect of the quantitative information.
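A moderation question of this kind is typically tested with an interaction term between study arm and numeracy. The minimal sketch below illustrates one way to do this with ordinary least squares; the file name, column names, and choice of outcome are assumptions for illustration, not details from the trial protocol.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis dataset: one row per participant, with the study arm
# (1 = DA with quantitative information, 0 = DA without), a numeracy score,
# and an outcome such as a 0-100 knowledge score. File and column names are
# placeholders, not taken from the trial protocol.
df = pd.read_csv("trial_outcomes.csv")

# The arm-by-numeracy interaction term is the moderation test: a significant
# interaction would indicate that the effect of the quantitative-information
# DA depends on the participant's numeracy level.
model = smf.ols("knowledge ~ arm * numeracy", data=df).fit()
print(model.summary())
```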