Project description: Children's sharing decisions are shaped by recipient characteristics such as need and reputation, yet studies often focus on one characteristic at a time. This research examines how combinations of recipient characteristics affect costly sharing decisions among 3- to 9-year-old children (N = 186). Children were informed about the material need (needy or not needy) and reputation (sharing or not sharing) of potential recipients before having the opportunity to share stickers with them. Results indicated that sharing was higher when the recipient was needy and increased further when the recipient also had a reputation for sharing. Children shared more than half of their stickers with a needy, sharing recipient and less than half with a not-needy, not-sharing recipient. Children shared equally with recipients who were needy but not sharing or not needy but sharing, suggesting no preference for either characteristic. To explore the emotional benefits of sharing, children rated their own and the recipient's mood before and after sharing; ratings of the recipient's mood increased more when more resources were shared. These findings suggest that children consider multiple recipient characteristics in their sharing decisions, demonstrating altruism toward those in need and indirectly reciprocating past sharing based on reputation.
Project description: This study investigated the utility of supervised machine learning (SML) and explainable artificial intelligence (AI) techniques for modeling and understanding human decision-making during multiagent task performance. Long short-term memory (LSTM) networks were trained to predict the target selection decisions of expert and novice players completing a multiagent herding task. The results revealed that the trained LSTM models could not only accurately predict the target selection decisions of expert and novice players, but could do so at timescales that preceded a player's conscious intent. Importantly, the models were also expertise specific: models trained to predict the target selection decisions of experts could not accurately predict the target selection decisions of novices (and vice versa). To understand what differentiated expert and novice target selection decisions, we employed the explainable-AI technique SHapley Additive exPlanations (SHAP) to identify which informational features (variables) most influenced model predictions. The SHAP analysis revealed that experts relied more than novices on information about target heading direction and the location of co-herders (i.e., other players). The implications and assumptions underlying the use of SML and explainable-AI techniques for investigating and understanding human decision-making are discussed.
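As an illustration of the pipeline this description outlines, the following is a minimal sketch (not the study's actual code) of training an LSTM classifier on herding-task state sequences and then applying SHAP to attribute its predictions to input features. The array shapes, the number of candidate targets, and the synthetic data are assumptions made for the example.

```python
# Minimal sketch: LSTM target-selection classifier followed by SHAP attribution.
# Shapes, the number of candidate targets, and the data are illustrative
# assumptions; the study's own feature set is not reproduced here.
import numpy as np
import tensorflow as tf
import shap

n_trials, n_timesteps, n_features = 500, 50, 6        # assumed dimensions
X = np.random.randn(n_trials, n_timesteps, n_features).astype("float32")
y = np.random.randint(0, 4, size=n_trials)             # 4 candidate targets (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(n_timesteps, n_features)),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Attribute predictions to the input features; SHAP values indicate how much
# each feature at each timestep pushed the model toward a given target choice.
explainer = shap.GradientExplainer(model, X[:100])
shap_values = explainer.shap_values(X[:10])
```

Averaging absolute SHAP values over trials and timesteps yields a per-feature importance ranking of the kind used to contrast expert and novice models.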
Project description: The strain on healthcare resources brought forth by the recent COVID-19 pandemic has highlighted the need for efficient resource planning and allocation through the prediction of future consumption. Machine learning can predict resource utilization, such as the need for hospitalization, based on past medical data stored in electronic medical records (EMR). We conducted this study on 3194 patients (46% male, mean age 56.7 (±16.8), 56% African American, 7% Hispanic) flagged as COVID-19-positive cases in 12 centers within the Emory Healthcare network from February 2020 to September 2020, to assess whether a COVID-19-positive patient's need for hospitalization can be predicted at the time of the RT-PCR test using the EMR data available prior to the test. Five main EMR modalities, i.e., demographics, medication, past medical procedures, comorbidities, and laboratory results, were used as features for predictive modeling, both individually and fused together using late, middle, and early fusion. Models were evaluated in terms of precision, recall, and F1-score (with 95% confidence intervals). The early fusion model is the most effective predictor, with an overall F1-score of 84% [95% CI 82.1-86.1]. The predictive performance of the model drops by 6% when using only recent clinical data and omitting the long-term medical history. Feature importance analysis indicates that a history of cardiovascular disease, emergency room visits in the year prior to testing, and demographic factors are predictive of the disease trajectory. We conclude that fusion modeling using medical history and current treatment data can forecast the need for hospitalization for patients infected with COVID-19 at the time of the RT-PCR test.
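The following is a minimal sketch of the early-fusion setup described above, using synthetic data: per-modality feature blocks are concatenated before a single classifier is fit, and precision, recall, and F1 are computed on a held-out split. The feature counts and classifier choice are illustrative assumptions, not the study's configuration.

```python
# Early fusion sketch: concatenate EMR modality features, fit one classifier,
# evaluate precision/recall/F1. All data and dimensions are synthetic assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(0)
n = 1000
modalities = {                              # assumed per-modality feature counts
    "demographics": rng.normal(size=(n, 5)),
    "medications": rng.normal(size=(n, 20)),
    "procedures": rng.normal(size=(n, 15)),
    "comorbidities": rng.normal(size=(n, 10)),
    "labs": rng.normal(size=(n, 25)),
}
X = np.hstack(list(modalities.values()))    # early fusion: feature concatenation
y = rng.integers(0, 2, size=n)              # 1 = hospitalized (synthetic label)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
prec, rec, f1, _ = precision_recall_fscore_support(y_te, clf.predict(X_te),
                                                   average="binary")
print(f"precision={prec:.2f} recall={rec:.2f} F1={f1:.2f}")
```

Late and middle fusion differ only in where the combination happens: separate per-modality models whose predictions (late) or intermediate representations (middle) are merged.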
Project description: Background: The objective of this study was to identify barriers to surrogate decision-makers' application of patient values to decisions about life-sustaining treatments after stroke in Mexican American (MA) and non-Hispanic White (NHW) patients. Methods: We conducted a qualitative analysis of semistructured interviews with stroke patients' surrogate decision-makers completed approximately 6 months after hospitalization. Results: Forty-two family surrogate decision-makers participated (median age: 54.5 years; female: 83%; patients were MA [60%] and NHW [36%], and 50% were deceased at the time of the interview). We identified three primary barriers to surrogates' application of patient values and preferences when making decisions about life-sustaining treatments: (1) a minority of surrogates had no prior discussion of what the patient would want in the event of a serious medical illness, (2) surrogates struggled to apply previously known values and preferences to the actual decisions made, and (3) surrogates felt guilt or burden, often even when they had some knowledge of patient values or preferences. The first two barriers were seen to a similar degree in MA and NHW participants, though guilt or burden was reported more commonly among MA (28%) than NHW (13%) participants. Maintaining patient independence (e.g., the ability to live at home, avoid a nursing home, and make their own decisions) was the most important decision-making priority for both MA and NHW participants; however, MA participants were more likely to list spending time with family as an important priority (24% vs. 7%). Conclusions: Stroke surrogate decision-makers may benefit from (1) continued efforts to make advance care planning more common and more relevant, (2) assistance in applying their knowledge of patient values to actual treatment decisions, and (3) psychosocial support to reduce emotional burden. Barriers to surrogate application of patient values were generally similar in MA and NHW participants, though the possibility of greater guilt or burden among MA surrogates warrants further investigation and confirmation.
Project description: Hundreds of millions of people now interact with language models, with uses ranging from help with writing [1,2] to informing hiring decisions [3]. However, these language models are known to perpetuate systematic racial prejudices, making their judgements about groups such as African Americans biased in problematic ways [4-7]. Although previous research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time, particularly in the United States after the civil rights movement [8,9]. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded. By contrast, the language models' overt stereotypes about African Americans are more positive. Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes, and be sentenced to death. Finally, we show that current practices of alleviating racial bias in language models, such as human preference alignment, exacerbate the discrepancy between covert and overt stereotypes by superficially obscuring the racism that language models maintain on a deeper level. Our findings have far-reaching implications for the fair and safe use of language technology.
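As an illustration only (not necessarily the authors' procedure), one way to probe such dialect-conditioned associations is to compare a masked language model's fill-in scores for trait words following matched African American English and Standardized American English prompts. The prompts, trait list, and model below are assumptions made for this sketch.

```python
# Illustrative probe: compare a masked LM's fill-in scores for trait words after
# matched AAE and SAE prompts. Prompts, traits, and model are sketch assumptions,
# not the paper's materials.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

prompts = {
    "sae": "He is so tired he is about to fall asleep. He is <mask>.",
    "aae": "He so tired he finna fall asleep. He is <mask>.",
}
traits = ["lazy", "intelligent", "aggressive", "brilliant"]

for guise, text in prompts.items():
    scores = {pred["token_str"].strip(): pred["score"]
              for pred in fill(text, targets=traits)}
    print(guise, scores)
```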
Project description: BACKGROUND: Precision medicine requires a stratification of patients by disease presentation that is sufficiently informative to allow treatments to be selected on a per-patient basis. For many diseases, such as neurological disorders, this stratification problem translates into a complex problem of clustering multivariate and relatively short time series, because (i) these diseases are multifactorial and not well described by single clinical outcome variables and (ii) disease progression needs to be monitored over time. Additionally, clinical data are often hindered by the presence of many missing values, further complicating any clustering attempt. FINDINGS: The problem of clustering multivariate short time series with many missing values is generally not well addressed in the literature. In this work, we propose a deep learning-based method to address this issue: variational deep embedding with recurrence (VaDER). VaDER relies on a Gaussian mixture variational autoencoder framework, which is further extended to (i) model multivariate time series and (ii) directly deal with missing values. We validated VaDER by accurately recovering clusters from simulated and benchmark data with known ground-truth clustering, while varying the degree of missingness. We then used VaDER to successfully stratify patients with Alzheimer disease and patients with Parkinson disease into subgroups characterized by clinically divergent disease progression profiles. Additional analyses demonstrated that these clinical differences reflected known underlying aspects of Alzheimer disease and Parkinson disease. CONCLUSIONS: We believe our results show that VaDER can be of great value for future efforts in patient stratification, and multivariate time-series clustering in general.
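The following is a simplified sketch of the two ingredients highlighted above, not the published VaDER implementation: a recurrent autoencoder for multivariate short time series in which missing values are handled by masking the reconstruction loss, with clustering performed in the latent space (VaDER itself fits the Gaussian mixture jointly with the autoencoder rather than post hoc). Shapes, hyperparameters, and data are illustrative assumptions.

```python
# Simplified sketch: recurrent autoencoder with a masked reconstruction loss for
# multivariate short time series with missing values, plus post hoc latent-space
# clustering. Not the published VaDER code; shapes and data are assumptions.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

class RecurrentAE(nn.Module):
    def __init__(self, n_features, latent_dim=8, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)
        self.from_latent = nn.Linear(latent_dim, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):
        _, h = self.encoder(x)                    # final hidden state
        z = self.to_latent(h[-1])                 # latent code per patient
        dec_in = self.from_latent(z).unsqueeze(1).repeat(1, x.size(1), 1)
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out), z

def masked_mse(x_hat, x, mask):
    # Reconstruction error over observed entries only (mask = 1 where observed).
    return ((x_hat - x) ** 2 * mask).sum() / mask.sum().clamp(min=1)

# Synthetic cohort: 200 patients, 6 visits, 4 clinical variables, ~20% missing.
x = torch.randn(200, 6, 4)
mask = (torch.rand_like(x) > 0.2).float()
x = x * mask                                       # zero-fill the missing entries

model = RecurrentAE(n_features=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    x_hat, z = model(x)
    loss = masked_mse(x_hat, x, mask)
    opt.zero_grad()
    loss.backward()
    opt.step()

# VaDER fits a Gaussian mixture jointly with the autoencoder; fitting one on the
# learned latent codes afterwards is a simplification for illustration.
labels = GaussianMixture(n_components=3).fit_predict(z.detach().numpy())
```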
Project description: Rationale, aims and objectives: Supporting evidence for diagnostic test recommendations in clinical practice guidelines (CPGs) should include not only diagnostic accuracy but also the downstream consequences of the test result for patient-relevant outcomes. The aim of this study is to assess the extent to which evidence-based CPGs about diagnostic tests cover all relevant components of the test-treatment pathway. Methods: We performed a systematic document analysis and quality assessment of publicly accessible CPGs about three common diagnostic tests: C-reactive protein, colonoscopy, and fractional exhaled nitric oxide. Evaluation of the impact of the full test-treatment pathway (diagnostic accuracy, burden of the test, natural course of the target condition, treatment effectiveness, and the link between test result and administration of treatment) on patient-relevant outcomes was considered best practice for developing medical test recommendations. Results: We retrieved 15 recommendations in 15 CPGs. The methodological quality of the CPGs varied from poor to excellent. Ten recommendations considered diagnostic accuracy; four of these were founded on a systematic review and rating of the certainty in the evidence. None of the CPGs evaluated all steps of the test-treatment pathway. Burden of the test was considered in three CPGs, but without systematically reviewing the evidence. Natural course was considered in two CPGs, without a systematic review of the evidence. Treatment effectiveness was considered in three recommendations, supported by a systematic review and rating of the certainty in the evidence in one CPG. The link between test result and treatment administration was not considered in any CPG. Conclusions: The included CPGs hardly seem to consider evidence about test consequences for patient-relevant outcomes. This might be explained by reporting issues and challenging methodology. Future research is needed to investigate how to help guideline developers explicitly and reliably consider all steps of a test-treatment pathway when developing diagnostic test recommendations.
Project description: Background: Heterogeneity in patients' responses to treatment is prevalent in psychiatric disorders. Personalized medicine approaches, which involve parsing patients into subgroups better indicated for a particular treatment, could therefore improve patient outcomes and serve as a powerful tool in patient selection for clinical trials. Machine learning approaches can identify patient subgroups but are often not "explainable" because they use complex algorithms that do not mirror clinicians' natural decision-making processes. Methods: Here we combine two analytical approaches, Personalized Advantage Index and Bayesian Rule Lists, to identify paliperidone-indicated schizophrenia patients in a way that emphasizes model explainability. We apply these approaches retrospectively to randomized, placebo-controlled clinical trial data to identify a paliperidone-indicated subgroup of schizophrenia patients who demonstrate a larger treatment effect (outcome on treatment superior to that on placebo) than the full randomized sample, as assessed with Cohen's d. For this study, the outcome was a reduction in the Positive and Negative Syndrome Scale (PANSS) total score, which measures positive (e.g., hallucinations, delusions), negative (e.g., blunted affect, emotional withdrawal), and general psychopathological (e.g., disturbance of volition, uncooperativeness) symptoms in schizophrenia. Results: Using our combined explainable AI approach to identify a subgroup more responsive to paliperidone than to placebo, the treatment effect increased significantly over that of the full sample (p < 0.0001 for a one-sample t-test comparing the full-sample Cohen's d = 0.82 with a generated distribution of subgroup Cohen's d values with mean d = 1.22, SD = 0.09). In addition, our modeling approach produces simple logical statements (if-then-else), termed a "rule list", to ease interpretability for clinicians. A majority of the rule lists generated from cross-validation found two general psychopathology symptoms, disturbance of volition and uncooperativeness, to predict membership in the paliperidone-indicated subgroup. Conclusions: These results help to technically validate our explainable AI approach to patient selection for a clinical trial by identifying a subgroup with an improved treatment effect. With these data, the explainable rule lists also suggest that paliperidone may provide an improved therapeutic benefit for schizophrenia patients with either high disturbance of volition or high uncooperativeness. Trial registration: ClinicalTrials.gov identifier NCT00083668; prospectively registered May 28, 2004.
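A minimal sketch of the Personalized Advantage Index idea described above, assuming synthetic trial data: separate outcome models are fit per arm, each patient is scored by the predicted between-arm difference, and Cohen's d is compared between the full sample and the indicated subgroup. The variable names, the median subgroup cutoff, and the omission of the Bayesian Rule List step (which describes the subgroup in if-then form) are simplifications, not the study's procedure.

```python
# PAI sketch: per-arm outcome models, per-patient predicted treatment advantage,
# and Cohen's d in the full sample vs. the indicated subgroup. Synthetic data and
# a crude median cutoff are assumptions; the Bayesian Rule List step is omitted.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n, p = 400, 10
X = rng.normal(size=(n, p))                       # baseline features (synthetic)
treat = rng.integers(0, 2, size=n)                # 1 = active drug, 0 = placebo
y = X[:, 0] - 2.0 * treat * (X[:, 1] > 0) + rng.normal(size=n)  # outcome change

# Fit separate outcome models per arm (lower predicted change = better response).
m_treat = Ridge().fit(X[treat == 1], y[treat == 1])
m_plac = Ridge().fit(X[treat == 0], y[treat == 0])

pai = m_treat.predict(X) - m_plac.predict(X)      # predicted advantage of drug
indicated = pai < np.median(pai)                  # crude subgroup cut (assumption)

def cohens_d(a, b):
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

d_full = cohens_d(y[treat == 0], y[treat == 1])
d_sub = cohens_d(y[(treat == 0) & indicated], y[(treat == 1) & indicated])
print(f"Cohen's d, full sample: {d_full:.2f}; indicated subgroup: {d_sub:.2f}")
```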
Project description: Chronic kidney disease (CKD) patients can benefit from personalized education on lifestyle and nutrition management strategies to enhance healthcare outcomes. This study explored the potential of chatbots, introduced in 2022, as a tool for educating CKD patients. A set of 15 questions on lifestyle modification and nutrition, derived from a thorough review of three specific KDIGO guidelines, was developed and posed in various formats, including the original wording, paraphrases with different adverbs, incomplete sentences, and misspellings. Four chatbot versions were used to answer these questions: ChatGPT 3.5 (March and September 2023 versions), ChatGPT 4, and Bard AI. Additionally, 20 questions on lifestyle modification and nutrition were derived from the NKF KDOQI guidelines for nutrition in CKD (2020 update) and answered by the four chatbot versions. Nephrologists reviewed all answers for accuracy. ChatGPT 3.5 produced largely accurate responses across the different question complexities, with occasional misleading statements from the March version. The September 2023 version frequently cited its last update as September 2021 and did not provide specific references, while the November 2023 version did not provide any misleading information. ChatGPT 4 presented answers similar to those of 3.5 but with improved reference citations, though these were not always directly relevant. Bard AI, while largely accurate and at times including pictorial representations, occasionally produced misleading statements and had inconsistent reference quality, although an improvement was noted over time. Bing AI in November 2023 gave short answers without detailed elaboration and sometimes answered simply "YES". Chatbots demonstrate potential as personalized educational tools for CKD: they use layman's terms, deliver timely and rapid responses in multiple languages, and offer a conversational pattern advantageous for patient engagement. Despite improvements observed from March to November 2023, some answers remained potentially misleading. ChatGPT 4 offers some advantages over 3.5, although the differences are limited. Collaboration between healthcare professionals and AI developers is essential to improve healthcare delivery and ensure the safe incorporation of chatbots into patient care.
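A hedged sketch of how the question variants described above could be posed programmatically to a chat model (the study may well have used the chatbots' web interfaces instead); the model name, API client, and example question text are assumptions made for the illustration.

```python
# Hypothetical workflow sketch: pose each guideline-derived question in its
# original, paraphrased, incomplete, and misspelled forms to a chat model and
# collect the answers for expert review. Model name and question text are
# illustrative assumptions, not the study's materials.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

question_variants = {
    "original": "How much dietary sodium is recommended for adults with CKD?",
    "paraphrased": "Roughly how much salt should an adult with CKD eat per day?",
    "incomplete": "Recommended sodium intake for CKD adults",
    "misspelled": "How much dietery sodum is recomended for adults with CKD?",
}

answers = {}
for variant, text in question_variants.items():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": text}],
    )
    answers[variant] = response.choices[0].message.content

# Collected answers would then be exported for nephrologist accuracy review.
```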