Project description: The decision to lie to another person involves a conflict between one's own and others' interests. Political ideology may foster self-promoting or self-transcending values and thus may balance or fuel self- vs. other-related conflicts. Here, we explored in politically non-aligned participants whether oculomotor behavior may index the influence of prime stimuli related to left- and right-wing ideologies on moral decision-making. We presented pictures of Italian politicians and ideological words in a paradigm where participants could lie to opponents of high vs. low socio-economic status to obtain a monetary reward. Results show that left-wing words decreased self-gain lies and increased other-gain ones. Oculomotor behavior revealed that gazing longer at politicians' pictures led participants to look longer at the opponent's status-related information than at the game's outcome-related information before the decision. This, in turn, caused participants to lie less to low-status opponents. Moreover, after lying, participants averted their gaze from high-status opponents and maintained it on low-status ones. Our results offer novel evidence that ideological priming influences moral decision-making and suggest that oculomotor behavior may provide crucial insights into how this process takes place.
Project description: Background: Randomized controlled trials (RCTs) in mental disorders research commonly use active control groups, including psychotherapeutic shams or inactive medication. This meta-analysis assessed whether placebo conditions (active controls) had an effect compared to no treatment or usual care (passive controls). Methods: PubMed, Scopus, PsycINFO, PsycARTICLES, Ovid, the Cochrane Central Register of Controlled Trials, and Web of Science were searched from inception to April 2021, as were reference lists of relevant articles. Three-arm RCTs including both active and passive control groups were selected. Where an individual standardized mean difference (SMD) was calculable, random-effects meta-analyses were performed to estimate an overall effect size with 95% confidence intervals (CI) comparing active vs. passive controls. Heterogeneity was assessed using the I² statistic and meta-regression. Funnel plot asymmetry was evaluated using Egger's test (PROSPERO registration: CRD42021242940). Results: 24 articles with 25 relevant RCTs were included in the review, of which 11 studies were at high risk of bias. There was an improvement in outcomes favouring the placebo conditions, compared to passive controls, overall (25 studies, SMD 0.24, 95% CI 0.06-0.42, I² = 43%) and in subgroups with anxiety (SMD 0.45, 95% CI 0.07-0.84, I² = 59%) or depression (SMD 0.22, 95% CI 0.04-0.39, I² = 0%). Meta-regression did not identify significant sources of heterogeneity. Egger's test showed no asymmetry (p = .200). Conclusions: A small placebo effect was observed in mental disorders research overall, and in patients with anxiety or depression. These findings should be interpreted with caution in the light of heterogeneity and risk of bias.
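The pooled estimates above come from random-effects meta-analyses of per-study standardized mean differences. A minimal sketch of that kind of pooling (a DerSimonian-Laird estimator with an I² heterogeneity statistic) is given below; the SMDs and variances in the example are illustrative placeholders, not data extracted in the review.

```python
import numpy as np

def random_effects_pool(smd, var):
    """DerSimonian-Laird random-effects pooling of per-study standardized mean differences."""
    smd, var = np.asarray(smd, float), np.asarray(var, float)
    k = len(smd)
    w_fe = 1.0 / var                                    # fixed-effect (inverse-variance) weights
    mu_fe = np.sum(w_fe * smd) / np.sum(w_fe)
    q = np.sum(w_fe * (smd - mu_fe) ** 2)               # Cochran's Q
    c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_re = 1.0 / (var + tau2)                           # random-effects weights
    mu_re = np.sum(w_re * smd) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return mu_re, (mu_re - 1.96 * se, mu_re + 1.96 * se), i2

# Illustrative SMDs and variances only, not the review's extracted data
pooled_smd, ci95, i2 = random_effects_pool([0.10, 0.30, 0.45], [0.02, 0.03, 0.05])
```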
Project description: In this proof-of-concept study, we tested whether placebo effects can be monitored and predicted by plasma proteins. In a randomized controlled design, 90 participants were exposed to a nauseating stimulus on two separate days and were randomly allocated to placebo treatment or no treatment on the second day. Significant placebo effects on nausea, motion sickness, and gastric activity were verified. Using state-of-the-art proteomics, 74 differentially regulated proteins were identified as correlates of the placebo effect. Gene Ontology (GO) enrichment analyses identified acute-phase and microinflammatory proteins as being involved, and the identified GO signatures predicted day-adjusted scores of nausea indices in the placebo group. We also performed GO enrichment analyses of specific plasma proteins predictable by the experimental factors or their interactions and identified ‘grooming behavior’ as a prominent hit. Finally, receiver operating characteristic (ROC) analyses identified plasma proteins differentiating placebo responders from non-responders, comprising immunoglobulins and proteins involved in oxidation-reduction processes and complement activation. Plasma proteomics is a promising tool for identifying molecular correlates and predictors of the placebo effect in humans.
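The responder/non-responder discrimination described above rests on ROC analysis of individual proteins. A minimal sketch of how one candidate protein could be scored this way with scikit-learn follows; the variable names and simulated values are illustrative stand-ins, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative stand-ins: 'responder' is the binary placebo-responder label,
# 'protein_level' the plasma level of one candidate protein (simulated here).
rng = np.random.default_rng(0)
responder = rng.integers(0, 2, size=90)
protein_level = 0.8 * responder + rng.normal(size=90)

auc = roc_auc_score(responder, protein_level)      # discrimination of responders vs non-responders
fpr, tpr, thresholds = roc_curve(responder, protein_level)
best_cutoff = thresholds[np.argmax(tpr - fpr)]     # Youden-index cut-off for this protein
print(f"AUC = {auc:.2f}, suggested cut-off = {best_cutoff:.2f}")
```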
Project description: Background: Contextual effects (i.e., placebo response) refer to all health changes resulting from administering an apparently inactive treatment. In a randomized clinical trial (RCT), the overall treatment effect (i.e., the post-treatment effect in the intervention group) can be regarded as the true effect of the intervention plus the impact of contextual effects. This meta-research was conducted to examine the average proportion of the overall treatment effect attributable to contextual effects in RCTs across clinical conditions and treatments, and to explore whether it varies with trial contextual factors. Methods: Data were extracted from trials included in the main meta-analysis of the latest update of the Cochrane review on "Placebo interventions for all clinical conditions" (searched from 1966 to March 2008). Only RCTs reported in English with an experimental intervention group, a placebo comparator group, and a no-treatment control group were eligible. Results: In total, 186 trials (16,655 patients) were included. On average, 54% (0.54, 95% CI 0.46 to 0.64) of the overall treatment effect was attributable to contextual effects. Contextual effects were higher for trials with blinded outcome assessors and concealed allocation, and appeared to increase in proportion to the placebo effect and with lower mean age and a higher proportion of female participants. Conclusion: Approximately half of the overall treatment effect in RCTs seems attributable to contextual effects rather than to the specific effect of treatments. As the study did not include all important contextual factors (e.g., patient-provider interaction), the true proportion of contextual effects could differ from the study's results. However, contextual effects should be considered when assessing treatment effects in clinical practice. Trial registration: PROSPERO CRD42019130257. Registered on April 19, 2019.
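In meta-research of this kind, the proportion attributable to contextual effects is commonly computed as the ratio of the placebo-versus-no-treatment effect to the active-versus-no-treatment effect. A minimal sketch under that assumption follows; the function name and the example numbers are illustrative, chosen only to reproduce a proportion of 0.54.

```python
def contextual_proportion(smd_placebo_vs_none: float, smd_active_vs_none: float) -> float:
    """Share of the overall treatment effect attributable to contextual (placebo) effects,
    taken as the ratio of the placebo-vs-no-treatment effect to the active-vs-no-treatment effect."""
    if smd_active_vs_none == 0:
        raise ValueError("overall treatment effect is zero; the proportion is undefined")
    return smd_placebo_vs_none / smd_active_vs_none

# Illustrative numbers only: a 0.27 SMD placebo response against a 0.50 SMD overall effect gives 0.54
print(contextual_proportion(0.27, 0.50))
```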
Project description: Previous deception research on repeated interviews found that liars are not less consistent than truth tellers, presumably because liars use a "repeat strategy" to be consistent across interviews. The goal of this study was to design an interview procedure that overcomes this strategy. Innocent participants (truth tellers) and guilty participants (liars) had to convince an interviewer that they had performed several innocent activities rather than committing a mock crime. The interview focused on the innocent activities (alibi), contained specific central and peripheral questions, and was repeated after 1 week without forewarning. Cognitive load was increased by asking participants to reply quickly. The liars' answers to both central and peripheral questions were significantly less accurate, less consistent, and more evasive than the truth tellers' answers. Logistic regression analyses yielded classification rates ranging from around 70% (with consistency as the predictor variable), through 85% (with evasive answers as the predictor variable), to over 90% (with an improved consistency measure that incorporated evasive answers, as well as with response accuracy, as the predictor variable). These classification rates were higher than the interviewers' accuracy rate (54%).
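The classification analysis reported above amounts to fitting a logistic regression on interview-derived predictors and scoring its accuracy. A hedged sketch with scikit-learn follows; the predictor names (consistency, evasive_answers) and the simulated data are illustrative stand-ins, not the study's materials.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative data: one row per interviewee; the generative model below is a placeholder.
rng = np.random.default_rng(1)
n = 80
liar = rng.integers(0, 2, size=n)                      # 1 = guilty participant (liar)
consistency = 0.8 - 0.2 * liar + rng.normal(0, 0.15, size=n)
evasive_answers = 2.0 * liar + rng.poisson(1.0, size=n)
X = np.column_stack([consistency, evasive_answers])

# Cross-validated classification rate, analogous to the rates reported above
accuracy = cross_val_score(LogisticRegression(), X, liar, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy: {accuracy:.2f}")
```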
Project description: Situations such as an entrepreneur overstating a project's value, or a superior choosing to under- or overstate the gains from a project to a subordinate, are common and may result in acts of deception. In this paper we modify the standard investment game from the economics literature to study the nature of deception. In this game a trustor (investor) can send a given amount of money to a trustee (investee). The amount sent is multiplied by a factor, k, and the investee then decides how to divide the total amount received. In our modified game, the value of the multiplier k is known only to the investee, who can send a non-binding message to the investor regarding its value. We find that 66% of the investees send false messages, with both understatement and overstatement observed. Investors are naive and almost half of them believe the message received. We find greater lying when the distribution of the multiplier is unknown to the investors than when they know the distribution. Further, messages make beliefs about the multiplier more pessimistic when the investors know its distribution, while the opposite is true when they do not.
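To make the game structure concrete, a minimal sketch of one round of the modified investment game is given below. The class and parameter names (endowment, k, returned_share) are illustrative assumptions rather than the authors' implementation; the sketch only encodes the payoff structure described above.

```python
from dataclasses import dataclass

@dataclass
class ModifiedInvestmentGame:
    """One round of the modified investment game: only the investee observes the
    multiplier k and may send a non-binding (possibly false) message about it."""
    endowment: float = 10.0   # investor's initial endowment
    k: float = 3.0            # true multiplier, private information of the investee

    def play(self, message_k: float, sent: float, returned_share: float) -> dict:
        pot = sent * self.k                      # total amount the investee actually receives
        returned = returned_share * pot          # investee's chosen transfer back
        return {
            "investor_payoff": self.endowment - sent + returned,
            "investee_payoff": pot - returned,
            "deceptive_message": message_k != self.k,   # under- or overstatement of k
        }

# Illustrative round: the investee understates k (claims 2 when it is 3),
# the investor sends 5 and receives 30% of the multiplied amount back.
result = ModifiedInvestmentGame().play(message_k=2.0, sent=5.0, returned_share=0.3)
```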
Project description: Deception plays a critical role in the dissemination of information and has important consequences for the functioning of cultural, market-based, and democratic institutions. Deception has been widely studied within the fields of philosophy, psychology, economics, and political science. Yet, we still lack an understanding of how deception emerges in a society under competitive (evolutionary) pressures. This paper begins to fill this gap by bridging evolutionary models of social goods, namely public goods games (PGGs), with ideas from interpersonal deception theory (Buller and Burgoon 1996 Commun. Theory 6, 203-242. (doi:10.1111/j.1468-2885.1996.tb00127.x)) and truth-default theory (Levine 2014 J. Lang. Soc. Psychol. 33, 378-392. (doi:10.1177/0261927X14535916); Levine 2019 Duped: truth-default theory and the social science of lying and deception. University of Alabama Press). This provides a well-founded analysis of the growth of deception in societies and of the effectiveness of several approaches to reducing it. Assuming that knowledge is a public good, we use extensive simulation studies to explore (i) how deception impacts the sharing and dissemination of knowledge in societies over time, (ii) how different types of knowledge-sharing societies are affected by deception, and (iii) what type of policing and regulation is needed to reduce the negative effects of deception in knowledge sharing. Our results indicate that cooperation in knowledge sharing can be re-established by introducing institutions that investigate and regulate both defection and deception using a decentralized, case-by-case strategy. This provides evidence for adopting methods to reduce the use of deception in the world around us and so avoid a Tragedy of the Digital Commons (Greco and Floridi 2004 Ethics Inf. Technol. 6, 73-81. (doi:10.1007/s10676-004-2895-2)).
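A rough sense of this kind of simulation can be conveyed with a toy public goods game in which agents cooperate, defect, or deceive (free-ride while being believed to contribute, per the truth-default), and a decentralized, case-by-case investigation fines the free-riders it catches. This is a minimal sketch under those assumptions, not the paper's actual model or parameters; the catch probabilities and fine stand in for the policing institutions discussed above.

```python
import random

def simulate_pgg(n_agents=60, rounds=300, multiplier=1.8, endowment=1.0,
                 p_catch_defect=0.4, p_catch_deceive=0.15, fine=1.5, seed=0):
    """Toy public goods game with cooperators, defectors, and deceivers.
    Deceivers free-ride like defectors but, under a truth-default, are investigated less often."""
    rng = random.Random(seed)
    strategies = [rng.choice(["cooperate", "defect", "deceive"]) for _ in range(n_agents)]
    for _ in range(rounds):
        contributions = [endowment if s == "cooperate" else 0.0 for s in strategies]
        share = sum(contributions) * multiplier / n_agents
        payoffs = [share + (endowment - c) for c in contributions]
        # decentralized, case-by-case investigation fines free-riders that are caught
        for i, s in enumerate(strategies):
            p_catch = {"cooperate": 0.0, "defect": p_catch_defect, "deceive": p_catch_deceive}[s]
            if rng.random() < p_catch:
                payoffs[i] -= fine
        # imitation dynamics: copy a randomly chosen agent if it earned more this round
        for i in range(n_agents):
            j = rng.randrange(n_agents)
            if payoffs[j] > payoffs[i]:
                strategies[i] = strategies[j]
    return {s: strategies.count(s) for s in ("cooperate", "defect", "deceive")}

# Example: final strategy counts when institutions investigate both defection and deception
print(simulate_pgg())
```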
Project description: Background: Evaluating digital interventions using remote methods enables the recruitment of large numbers of participants relatively conveniently and cheaply compared with in-person methods. However, conducting research remotely based on participant self-report with little verification is open to automated "bots" and participant deception. Objective: This paper uses a case study of a remotely conducted trial of an alcohol reduction app to highlight and discuss (1) the issues with participant deception affecting remote research trials with financial compensation; and (2) the importance of rigorous data management to detect and address these issues. Methods: We recruited participants on the internet from July 2020 to March 2022 for a randomized controlled trial (n=5602) evaluating the effectiveness of an alcohol reduction app, Drink Less. Follow-up occurred at 3 time points, with financial compensation offered (up to £36 [US $39.23]). Address authentication and telephone verification were used to detect 2 kinds of deception: "bots," that is, automated responses generated in clusters; and manual participant deception, that is, participants providing false information. Results: Of the 1142 participants who enrolled in the first 2 months of recruitment, 75.6% (n=863) were identified as bots during data screening. As a result, a CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart) was added, and after this, no more bots were identified. Manual participant deception occurred throughout the study. Of the 5956 participants (excluding bots) who enrolled in the study, 298 (5%) were identified as false participants. This number decreased from 110 in November 2020 to a negligible level by February 2022, including several months with none. The decline occurred after we added further screening questions such as attention checks, removed the prominence of financial compensation from social media advertising, and added a requirement to provide a mobile phone number for identity verification. Conclusions: Data management protocols are necessary to detect automated bots and manual participant deception in remotely conducted trials. Bots and manual deception can be minimized by adding a CAPTCHA, attention checks, and a requirement to provide a phone number for identity verification, and by not prominently advertising financial compensation on social media. Trial registration: ISRCTN64052601; https://doi.org/10.1186/ISRCTN64052601.
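Leaving the CAPTCHA itself aside, the screening steps described can be operationalized as simple checks over the enrolment records: clustered sign-up times, duplicate contact details, failed attention checks, and missing phone numbers. A minimal sketch with pandas follows; all column names are hypothetical, not the trial's actual data schema.

```python
import pandas as pd

def flag_suspect_enrolments(df: pd.DataFrame, window_minutes: int = 5, cluster_size: int = 10) -> pd.DataFrame:
    """Flag likely bots and false participants in an enrolment table.
    Column names (signup_time, address, phone, attention_check_passed) are hypothetical."""
    df = df.copy()
    # bot-like clusters: many enrolments landing in the same short time window
    window = df["signup_time"].dt.floor(f"{window_minutes}min")
    df["cluster_flag"] = window.groupby(window).transform("size") >= cluster_size
    # manual-deception signals: duplicate contact details, failed attention check, no phone number
    df["duplicate_flag"] = df.duplicated("address", keep=False) | df.duplicated("phone", keep=False)
    df["attention_flag"] = ~df["attention_check_passed"].astype(bool)
    df["missing_phone_flag"] = df["phone"].isna()
    flag_cols = ["cluster_flag", "duplicate_flag", "attention_flag", "missing_phone_flag"]
    df["suspect"] = df[flag_cols].any(axis=1)
    return df
```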