Algorithm-based advice taking and clinical judgement: impact of advice distance and algorithm information.
ABSTRACT: Evidence-based algorithms can improve both lay and professional judgements and decisions, yet they remain underutilised. Research on advice taking has established that humans tend to discount advice, especially when it contradicts their own judgement ("egocentric advice discounting"), but this tendency can be mitigated by knowledge of the advisor's past performance. Advice discounting has typically been investigated using tasks with outcomes of low importance (e.g. general knowledge questions) and students as participants. Using the judge-advisor framework, we tested whether the principles of advice discounting apply in the clinical domain. We used realistic patient scenarios, algorithmic advice from a validated cancer risk calculator, and general practitioners (GPs) as participants. GPs could update their risk estimates after receiving the algorithmic advice. Half of them received information about the algorithm's derivation, validation, and accuracy. We measured weight of advice and found that, on average, GPs weighed their own estimates and the algorithmic advice equally, but not always: they retained their initial estimates 29% of the time and fully updated them 27% of the time. Updating did not depend on whether GPs were informed about the algorithm. We found a weak negative quadratic relationship between estimate updating and advice distance: although GPs integrate algorithmic advice on average, they may somewhat discount it if it is very different from their own estimate. These results present a more complex picture than simple egocentric discounting of advice. They cast a more optimistic view of advice taking, in which experts weigh algorithmic advice and their own judgement equally and move towards the advice even when it contradicts their initial estimates.
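The abstract does not spell out the weight-of-advice measure. A minimal sketch, assuming the standard judge-advisor definition WOA = (final − initial) / (advice − initial), where 0 means the initial estimate was fully retained and 1 means the advice was fully adopted; the numbers in the example are hypothetical, not taken from the study.

```python
from typing import Optional

def weight_of_advice(initial: float, advice: float, final: float) -> Optional[float]:
    """Standard judge-advisor weight of advice (WOA).

    0   -> judge retained the initial estimate
    1   -> judge fully adopted the advice
    0.5 -> own estimate and advice weighted equally
    Undefined when the advice coincides with the initial estimate.
    """
    if advice == initial:
        return None
    return (final - initial) / (advice - initial)

# Hypothetical example: a GP initially estimates a 10% cancer risk,
# the risk calculator advises 4%, and the GP revises the estimate to 7%.
woa = weight_of_advice(initial=10.0, advice=4.0, final=7.0)
print(woa)  # 0.5 -> estimate and algorithmic advice weighted equally
```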
SUBMITTER: Palfi B
PROVIDER: S-EPMC9329504 | biostudies-literature
REPOSITORIES: biostudies-literature