Project description: Highlights • Social media, fake news, and COVID-19. • Misinformation on social media has fuelled panic regarding COVID-19. • Altruism is the strongest predictor of fake news sharing on COVID-19. • Socialization, information seeking and pass time predict fake news sharing. • Entertainment is not associated with sharing fake news on COVID-19. Fake news dissemination on COVID-19 has increased in recent months, yet the factors that lead to the sharing of this misinformation are less well studied. Therefore, this paper describes the results from a Nigerian sample (n = 385) regarding the proliferation of fake news on COVID-19. The fake news phenomenon was studied using the Uses and Gratification framework, extended with an “altruism” motivation. The data were analysed with Partial Least Squares (PLS) to determine the effects of six variables on the outcome of fake news sharing. Our results showed that altruism was the most significant predictor of fake news sharing about COVID-19. We also found that social media users’ motivations of information sharing, socialisation, information seeking and pass time predicted the sharing of false information about COVID-19. In contrast, no significant association was found for the entertainment motivation. We conclude with some theoretical and practical implications.
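The analysis described above is a PLS path analysis (PLS-SEM, usually run in tools such as SmartPLS). As a rough illustration only, the following sketch uses scikit-learn's PLSRegression on simulated Likert-style scores; the variable names, simulated effect sizes, and the plain regression setup are our assumptions and stand in for the paper's full latent-variable model.

# Simplified PLS sketch: six motivation scores predicting fake news sharing.
# All data are simulated; the real study used PLS-SEM with latent constructs.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 385  # sample size reported in the abstract

# Hypothetical 7-point Likert composite scores for the six motivations.
predictors = ["altruism", "information_sharing", "socialisation",
              "information_seeking", "pass_time", "entertainment"]
X = rng.uniform(1, 7, size=(n, len(predictors)))

# Simulated outcome: sharing intention, with altruism as the strongest driver.
y = (0.45 * X[:, 0] + 0.20 * X[:, 1] + 0.15 * X[:, 2]
     + 0.15 * X[:, 3] + 0.10 * X[:, 4] + rng.normal(0, 1, n))

pls = PLSRegression(n_components=2).fit(X, y)
for name, coef in zip(predictors, pls.coef_.ravel()):
    print(f"{name:>20}: {coef:+.3f}")  # larger magnitude = stronger predictor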
Project description: Background: Proliferation of misinformation in digital news environments can harm society in a number of ways, but its dangers are most acute when citizens believe that false news is factually accurate. A recent wave of empirical research focuses on factors that explain why people fall for so-called fake news. In this scoping review, we summarize the results of experimental studies that test different predictors of individuals' belief in misinformation. Methods: The review is based on a synthetic analysis of 26 scholarly articles. The authors developed and applied a search protocol to two academic databases, Scopus and Web of Science. The sample included experimental studies that test factors influencing users' ability to recognize fake news, their likelihood to trust it, or their intention to engage with such content. Relying on scoping review methodology, the authors then collated and summarized the available evidence. Results: The study identifies three broad groups of factors contributing to individuals' belief in fake news. Firstly, message characteristics, such as belief consistency and presentation cues, can drive people's belief in misinformation. Secondly, susceptibility to fake news can be determined by individual factors, including people's cognitive styles, predispositions, and differences in news and information literacy. Finally, accuracy-promoting interventions, such as warnings or nudges that prime individuals to think about information veracity, can affect judgements about fake news credibility. Evidence suggests that inoculation-type interventions can be both scalable and effective. We note that study results could be partly driven by design choices such as the selection of stimuli and outcome measurement. Conclusions: We call for expanding the scope and diversifying the designs of empirical investigations into people's susceptibility to false information online. We recommend examining digital platforms beyond Facebook, using more diverse formats of stimulus material, and adding a comparative angle to fake news research.
Project description: Countering misinformation can reduce belief in the moment, but corrective messages quickly fade from memory. We tested whether the longer-term impact of fact-checks depends on when people receive them. In two experiments (total N = 2,683), participants read true and false headlines taken from social media. In the treatment conditions, "true" and "false" tags appeared before, during, or after participants read each headline. Participants in a control condition received no information about veracity. One week later, participants in all conditions rated the same headlines' accuracy. Providing fact-checks after headlines (debunking) improved subsequent truth discernment more than providing the same information during (labeling) or before (prebunking) exposure. This finding informs the cognitive science of belief revision and has practical implications for social media platform designers.
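The outcome measure here, truth discernment, is commonly computed as the difference between perceived accuracy of true headlines and perceived accuracy of false headlines. A minimal sketch on a hypothetical per-condition summary table; the column names and numbers below are ours, not the study's.

# Sketch: truth discernment = mean perceived accuracy of true headlines minus
# mean perceived accuracy of false headlines, per condition. Numbers invented.
import pandas as pd

ratings = pd.DataFrame({
    "condition": ["control", "control", "prebunk", "prebunk",
                  "label", "label", "debunk", "debunk"],
    "veracity": ["true", "false"] * 4,
    "mean_accuracy_rating": [3.1, 2.6, 3.2, 2.4, 3.3, 2.2, 3.4, 1.9],
})

wide = ratings.pivot(index="condition", columns="veracity",
                     values="mean_accuracy_rating")
wide["discernment"] = wide["true"] - wide["false"]
print(wide.sort_values("discernment", ascending=False))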
Project description: Fake news is a complex problem, and a variety of approaches have been used to identify it. In our paper, we focus on identifying fake news from its content. The dataset used, containing both fake and real news, was pre-processed using syntactic analysis. Dependency grammar methods were applied to the sentences of the dataset, and the importance of each word within its sentence was determined from the resulting parses. This information about the importance of words in sentences was used to create the input vectors for classification. The paper aims to find out whether dependency grammar can be used to improve the classification of fake news. We compared these methods with the TF-IDF method. The results show that dependency grammar information can be used to classify fake news with acceptable accuracy. An important finding is that dependency grammar can improve existing techniques: in our experiment, it improved the traditional TF-IDF technique.
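The abstract does not specify the exact pipeline, but one plausible reading can be sketched as follows: weight each token by its position in the dependency parse before feeding a bag-of-words classifier. The tooling (spaCy, scikit-learn), the depth-based weighting, and the toy examples are all our assumptions, not the paper's method.

# Sketch: dependency-weighted bag-of-words fed into a TF-IDF classifier. The
# depth-based weighting is our own guess at "word importance in the sentence".
# Requires: pip install spacy scikit-learn
#           python -m spacy download en_core_web_sm
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

nlp = spacy.load("en_core_web_sm")

def dependency_weighted(text: str) -> str:
    """Repeat each token in proportion to its importance in the parse tree,
    so the TF-IDF vectorizer implicitly receives dependency-based weights."""
    out = []
    for tok in nlp(text):
        if tok.is_alpha:
            depth = sum(1 for _ in tok.ancestors)  # distance from the root
            out.extend([tok.lower_] * max(1, 4 - depth))
    return " ".join(out)

texts = ["Scientists confirm the new vaccine is safe and effective",   # toy real
         "Miracle cure suppressed by the government, doctors furious"]  # toy fake
labels = [0, 1]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit([dependency_weighted(t) for t in texts], labels)
print(clf.predict([dependency_weighted("Hidden cure suppressed by doctors")]))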
Project description: Fake news detection is growing in importance as a key topic in the information age. However, most current methods rely on pre-trained small language models (SLMs), which face significant limitations in processing news content that requires specialized knowledge, thereby constraining the effectiveness of fake news detection. To address these limitations, we propose the FND-LLM framework, which combines SLMs and LLMs to exploit their complementary strengths and to explore the capabilities of LLMs in multimodal fake news detection. The FND-LLM framework integrates a textual feature branch, a visual semantic branch, a visual tampering branch, a co-attention network, a cross-modal feature branch, and a large language model branch. The textual feature branch and the visual semantic branch extract the textual and visual information of the news content, respectively, while the co-attention network refines the interrelationship between the two. The visual tampering branch extracts image-tampering features from news images. The cross-modal feature branch enhances inter-modal complementarity through the CLIP model, while the large language model branch uses the inference capability of LLMs to provide auxiliary explanations for the detection process. Our experimental results indicate that the FND-LLM framework outperforms existing models, achieving improvements in overall accuracy of 0.7%, 6.8%, and 1.3% on Weibo, Gossipcop, and Politifact, respectively.
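To make one named component concrete, the sketch below shows a minimal co-attention fusion of text and image token embeddings in PyTorch. The dimensions, pooling, and classifier head are illustrative assumptions and do not reproduce the FND-LLM architecture.

# Minimal co-attention sketch: each modality attends to the other before
# fusion. Shapes and design are illustrative assumptions, not FND-LLM itself.
import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.txt_to_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_to_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)  # real vs. fake

    def forward(self, txt: torch.Tensor, img: torch.Tensor) -> torch.Tensor:
        # Each modality queries the other, yielding cross-refined features.
        txt_ref, _ = self.txt_to_img(txt, img, img)
        img_ref, _ = self.img_to_txt(img, txt, txt)
        fused = torch.cat([txt_ref.mean(dim=1), img_ref.mean(dim=1)], dim=-1)
        return self.classifier(fused)

model = CoAttentionFusion()
# Batch of 8: 32 text tokens and 49 image patches, each embedded in 256 dims.
logits = model(torch.randn(8, 32, 256), torch.randn(8, 49, 256))
print(logits.shape)  # torch.Size([8, 2])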
Project description: Recent research has explored the possibility of building attitudinal resistance against online misinformation through psychological inoculation. The inoculation metaphor relies on a medical analogy: by pre-emptively exposing people to weakened doses of misinformation, cognitive immunity can be conferred. A recent example is the Bad News game, an online game in which players learn about six common misinformation techniques. We present a replication and extension study of the effectiveness of Bad News as an anti-misinformation intervention. We address three shortcomings identified in the original study: the lack of a control group, the relatively low number of test items, and the absence of attitudinal certainty measurements. Using a 2 (treatment vs. control) × 2 (pre vs. post) mixed design (N = 196), we measure participants' ability to spot misinformation techniques in 18 fake headlines before and after playing Bad News. We find that playing Bad News significantly improves people's ability to spot misinformation techniques compared to a gamified control group and, crucially, also increases people's confidence in their own judgments. Importantly, this confidence boost occurred only for those who updated their reliability assessments in the correct direction. This study offers further evidence for the effectiveness of psychological inoculation against not only specific instances of fake news but also the very strategies used in its production. Implications are discussed for inoculation theory and cognitive science research on fake news.
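A 2 (treatment vs. control) × 2 (pre vs. post) mixed design of this kind is typically analysed with a mixed ANOVA, where the group × time interaction captures the intervention effect. A minimal sketch with pingouin on simulated scores; the sample size mirrors the abstract, but the score distributions are invented.

# Sketch: 2 (group, between) x 2 (time, within) mixed ANOVA on simulated data.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
n_per_group = 98  # N = 196 split across treatment and control

rows = []
for group, post_gain in [("bad_news", 1.0), ("control", 0.1)]:
    for subj in range(n_per_group):
        base = rng.normal(4.0, 1.0)  # pre-test ability to spot techniques
        rows.append({"id": f"{group}_{subj}", "group": group,
                     "time": "pre", "score": base})
        rows.append({"id": f"{group}_{subj}", "group": group,
                     "time": "post",
                     "score": base + post_gain + rng.normal(0, 0.5)})

df = pd.DataFrame(rows)
# The group x time interaction is the effect of interest.
print(pg.mixed_anova(data=df, dv="score", within="time",
                     between="group", subject="id"))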
Project description: Why do we share fake news? Despite a growing body of freely available knowledge and information, fake news has managed to spread more widely and deeply than before. This paper seeks to understand why this is the case. More specifically, using an experimental setting, we aim to quantify the effect of veracity and perception on reaction likelihood. To examine the nature of this relationship, we set up an experiment that mimics the mechanics of Twitter, allowing us to observe users' perceptions, their reactions to the claims shown, and the factual veracity of those claims. We find that perceived veracity significantly predicts how likely a user is to react, with higher perceived veracity leading to higher reaction rates. Additionally, we confirm that fake news is inherently more likely to be shared than other types of news. Lastly, we identify an activist-type behavior: belief in fake news is associated with significantly disproportionate spreading compared to belief in true news.
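One natural way to quantify such effects is a logistic regression of reaction on perceived and actual veracity. A minimal sketch with statsmodels on simulated trial data; the variable names and coefficients are illustrative assumptions, not the paper's estimates.

# Sketch: reaction likelihood modelled from perceived and actual veracity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "perceived_veracity": rng.uniform(0, 1, n),  # belief that the claim is true
    "is_fake": rng.integers(0, 2, n),            # actual veracity (1 = fake)
})
# Both higher perceived veracity and fakeness raise reaction probability here.
logit_p = -1.5 + 2.0 * df["perceived_veracity"] + 0.5 * df["is_fake"]
df["reacted"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("reacted ~ perceived_veracity + is_fake", data=df).fit(disp=0)
print(model.summary().tables[1])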
Project description: Fake news detection has gained increasing importance in the research community due to the widespread diffusion of fake news through media platforms. Many datasets have been released in the last few years with the aim of assessing the performance of fake news detection methods. In this survey, we systematically review twenty-seven popular datasets for fake news detection, providing insights into the characteristics of each dataset and a comparative analysis among them. We provide a characterization of fake news detection datasets based on eleven characteristics extracted from the surveyed datasets, along with a set of requirements for comparing and building new datasets. Given the ongoing interest in this research topic, the results of the analysis can guide researchers in selecting or defining suitable datasets for evaluating their fake news detection methods.
Project description: The present online intervention promoted family-based prosocial values, in terms of helping family members, among young adults to build resistance against fake news. This preregistered randomized controlled trial is among the first psychological fake news interventions in Eastern Europe, where the free press is weak and state-sponsored misinformation runs riot in mainstream media. In this intervention, participants were endowed with an expert role and asked to write a letter to their digitally less competent relatives explaining six strategies that help in recognizing fake news. Compared to the active control group, the intervention had an immediate effect (d = 0.32) on the fake news accuracy ratings of the young, advice-giving participants, and this effect persisted at the follow-up four weeks later (d = 0.22). The intervention also reduced participants' bullshit receptivity, both immediately after the intervention and in the long run. The present work demonstrates the power of using relevant social bonds to motivate behavior change among Eastern European participants. Our prosocial approach, with its robust grounding in human psychology, might complement prior interventions in the fight against misinformation.
Project description: The spread of fake news on social media is a pressing issue. Here, we develop a mathematical model on social networks in which news sharing is modeled as a coordination game. We use this model to study the effect of adding designated individuals who sanction fake news sharers (representing, for example, correction of false claims or public shaming of those who share such claims). By simulating our model on synthetic square lattices and small-world networks, we demonstrate that social network structure allows fake news spreaders to form echo chambers and more than doubles fake news' resistance to distributed sanctioning efforts. We confirm our results are robust to a wide range of coordination and sanctioning payoff parameters as well as initial conditions. Using a Twitter network dataset, we show that sanctioners can help contain fake news when placed strategically. Furthermore, we analytically determine the conditions required for peer sanctioning to be effective, including prevalence and enforcement levels. Our findings have implications for developing mitigation strategies to control misinformation and preserve the integrity of public discourse.
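A toy version of such a simulation is sketched below: agents on a square lattice play a coordination game over sharing fake vs. real news, designated sanctioners penalise neighbouring fake news sharers, and agents imitate better-performing neighbours. The payoff values, sanctioner density, and update rule are our assumptions, not the paper's parameters.

# Toy coordination-game simulation on a periodic square lattice.
import numpy as np

rng = np.random.default_rng(3)
L, steps = 50, 20000
COORD, SANCTION = 1.0, 2.0                 # matching payoff; sanction penalty
NEIGHBOURS = ((1, 0), (-1, 0), (0, 1), (0, -1))
state = rng.integers(0, 2, (L, L))         # 1 = shares fake news
sanctioner = rng.random((L, L)) < 0.05     # 5% designated sanctioners

def payoff(i, j):
    total = 0.0
    for di, dj in NEIGHBOURS:
        ni, nj = (i + di) % L, (j + dj) % L
        total += COORD * (state[ni, nj] == state[i, j])   # coordination gain
        if state[i, j] == 1 and sanctioner[ni, nj]:
            total -= SANCTION                             # peer sanction
    return total

for _ in range(steps):
    i, j = rng.integers(0, L, size=2)
    di, dj = NEIGHBOURS[rng.integers(4)]
    ni, nj = (i + di) % L, (j + dj) % L
    if payoff(ni, nj) > payoff(i, j):      # imitate a better-off neighbour
        state[i, j] = state[ni, nj]

print("fraction sharing fake news:", state.mean())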