Project description:Theories in favor of deliberative democracy are based on the premise that social information processing can improve group beliefs. While research on the "wisdom of crowds" has found that information exchange can increase belief accuracy on noncontroversial factual matters, theories of political polarization imply that groups will become more extreme, and less accurate, when beliefs are motivated by partisan political bias. A primary concern is that partisan biases are associated not only with more extreme beliefs, but also with a diminished response to social information. While bipartisan networks containing both Democrats and Republicans are expected to promote accurate belief formation, politically homogeneous networks are expected to amplify partisan bias and reduce belief accuracy. To test whether the wisdom of crowds is robust to partisan bias, we conducted two web-based experiments in which individuals answered factual questions known to elicit partisan bias before and after observing the estimates of peers in a politically homogeneous social network. In contrast to polarization theories, we found that social information exchange in homogeneous networks not only increased accuracy but also reduced polarization. Our results help generalize collective intelligence research to political domains.
Project description:Reconstructing gene regulatory networks from high-throughput data is a long-standing challenge. Through the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we performed a comprehensive blind assessment of over 30 network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae and in silico microarray data. We characterize the performance, data requirements and inherent biases of different inference approaches, and we provide guidelines for algorithm application and development. We observed that no single inference method performs optimally across all data sets. In contrast, integration of predictions from multiple inference methods shows robust and high performance across diverse data sets. We thereby constructed high-confidence networks for E. coli and S. aureus, each comprising ~1,700 transcriptional interactions at a precision of ~50%. We experimentally tested 53 previously unobserved regulatory interactions in E. coli, of which 23 (43%) were supported. Our results establish community-based methods as a powerful and robust tool for the inference of transcriptional gene regulatory networks.
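The community-integration idea above can be sketched as simple rank averaging: each method scores every candidate regulatory edge, scores are converted to per-method ranks, and the consensus orders edges by mean rank. The scores and method outputs below are invented for illustration; the actual DREAM aggregation pipeline is more involved.

```python
import numpy as np

def rank_average(score_lists):
    """Combine edge scores from several inference methods by averaging
    per-method ranks (rank 0 = best). Returns edge indices ordered by
    consensus rank, strongest first."""
    ranks = [np.argsort(np.argsort(-np.asarray(s))) for s in score_lists]
    return np.argsort(np.mean(ranks, axis=0))

# Hypothetical scores from three methods for four candidate edges:
m1 = [0.9, 0.1, 0.5, 0.3]
m2 = [0.8, 0.2, 0.6, 0.1]
m3 = [0.7, 0.3, 0.9, 0.2]
consensus = rank_average([m1, m2, m3])  # edges 0 and 2 lead the consensus
```

Rank averaging sidesteps the fact that different methods score edges on incompatible scales, which is one reason community aggregation stays robust across diverse data sets.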
Project description:Identifying individuals who are influential in diffusing information, ideas or products in a population remains a challenging problem. Most extant work can be abstracted by a process in which researchers first decide which features describe an influencer and then identify influencers as the individuals with the highest values of these features. This makes the identification dependent on the relevance of the selected features, and it remains uncertain whether triggering the identified influencers leads to a behavioral change in others. Furthermore, most work was developed for cross-sectional or time-aggregated datasets, where the time-evolution of influence processes cannot be observed. We show that mapping influencer identification to a wisdom of crowds problem overcomes these limitations. We present a framework in which the individuals in a social group repeatedly evaluate the contribution of other members according to what they perceive as valuable, not according to predefined features. We propose a method to aggregate the behavioral reactions of the members of the social group into a collective judgment that considers the temporal variation of influence processes. Using data from three large news providers, we show that the members of the group agree, to a surprising degree, on who the influential individuals are. The aggregation method addresses different sources of heterogeneity encountered in social systems and leads to results that are easily interpretable and comparable within and across systems. The approach we propose is computationally scalable and can be applied to any social system where behavioral reactions are observable.
Project description:Social networks continuously change as new ties are created and existing ones fade. It is widely acknowledged that our social embedding has a substantial impact on what information we receive and how we form beliefs and make decisions. However, most empirical studies on the role of social networks in collective intelligence have overlooked the dynamic nature of social networks and its role in fostering adaptive collective intelligence. Therefore, little is known about how groups of individuals dynamically modify their local connections and, accordingly, the topology of the network of interactions to respond to changing environmental conditions. In this paper, we address this question through a series of behavioral experiments and supporting simulations. Our results reveal that, in the presence of plasticity and feedback, social networks can adapt to biased and changing information environments and produce collective estimates that are more accurate than their best-performing member. To explain these results, we explore two mechanisms: 1) a global-adaptation mechanism where the structural connectivity of the network itself changes such that it amplifies the estimates of high-performing members within the group (i.e., the network "edges" encode the computation); and 2) a local-adaptation mechanism where accurate individuals are more resistant to social influence (i.e., adjustments to the attributes of the "node" in the network); therefore, their initial belief is disproportionately weighted in the collective estimate. Our findings substantiate the role of social-network plasticity and feedback as key adaptive mechanisms for refining individual and collective judgments.
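The local-adaptation mechanism described above (accurate individuals resisting social influence) can be illustrated with a minimal numerical sketch. The group of four estimates, the 1/(1 + error/20) weighting rule, and the fixed 0.5 baseline are all assumptions invented for illustration, not the paper's model.

```python
def revise(beliefs, self_weights):
    """One revision round in a fully connected group: individual i keeps
    self_weights[i] on its own estimate and puts the rest on the mean of
    the other members' estimates."""
    n = len(beliefs)
    new = []
    for b, w in zip(beliefs, self_weights):
        social = (sum(beliefs) - b) / (n - 1)
        new.append(w * b + (1 - w) * social)
    return new

truth = 50.0
start = [48.0, 70.0, 80.0, 90.0]  # one accurate member, three biased high
# Local adaptation: the more accurate a member (e.g., via performance
# feedback), the more it resists social influence.
w_adaptive = [1 / (1 + abs(b - truth) / 20) for b in start]
w_fixed = [0.5] * 4

err_fixed = abs(sum(revise(start, w_fixed)) / 4 - truth)        # stays at 22.0
err_adaptive = abs(sum(revise(start, w_adaptive)) / 4 - truth)  # drops below 18
```

With uniform weights the update conserves the group mean, so the collective error cannot improve; letting accuracy modulate resistance pulls the collective estimate toward the truth, which is the "node-attribute" adaptation the abstract describes.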
Project description:Professional fact-checkers and fact-checking organizations provide a critical public service. Skeptics of modern media, however, often question the accuracy and objectivity of fact-checkers. The current study assessed agreement between two independent fact-checkers, The Washington Post and PolitiFact, regarding the false and misleading statements of then President Donald J. Trump. Differences in statement selection and deceptiveness scaling were investigated. The Washington Post checked PolitiFact fact-checks 77.4% of the time (22.6% selection disagreement). Moderate agreement was observed for deceptiveness scaling. Nearly complete agreement was observed for bottom-line attributed veracity. Additional cross-checking with other sources (Snopes, FactCheck.org), with original sources, and with fact-checking for the first 100 days of President Joe Biden's administration was inconsistent with potential ideology effects. Our evidence suggests that fact-checking is a difficult enterprise, that there is considerable variability between fact-checkers in the raw number of statements checked, and that selection and scaling account for apparent discrepancies among fact-checkers.
Project description:Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem, this approach is feasible with efficient computational techniques. We evaluate this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation.
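A minimal sketch of the path-based idea: run Dijkstra over the knowledge graph with a cost for entering each node that grows with its degree, so paths through generic, high-degree concepts lend weaker support to a claim. The toy graph and the log(1 + degree) cost below are illustrative assumptions; the study's actual proximity metric differs in detail.

```python
import heapq, math

def truth_proximity(graph, source, target):
    """Dijkstra over an undirected knowledge graph where entering a node
    costs log(1 + its degree). Returns a proximity in (0, 1]; higher
    means a shorter, more specific path supports the claim."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return 1.0 / (1.0 + d)
        if d > dist.get(node, math.inf):
            continue
        for nxt in graph[node]:
            nd = d + math.log(1 + len(graph[nxt]))
            if nd < dist.get(nxt, math.inf):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return 0.0  # no path: no support for the claim

# Toy graph: the true claim connects via a specific node, the false one via a hub.
g = {
    "Barack Obama": ["Politician", "Hawaii"],
    "Politician": ["Barack Obama", "Canada", "Angela Merkel", "Justin Trudeau"],
    "Hawaii": ["Barack Obama", "United States"],
    "United States": ["Hawaii"],
    "Canada": ["Politician"],
    "Angela Merkel": ["Politician"],
    "Justin Trudeau": ["Politician"],
}
true_score = truth_proximity(g, "Barack Obama", "United States")
false_score = truth_proximity(g, "Barack Obama", "Canada")
```

Both paths have length two, but the false claim must pass through the high-degree "Politician" hub, so it receives a lower proximity; that asymmetry is what lets path proximity separate true from false statements.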
Project description:Today's media landscape affords people access to richer information than ever before, with many individuals opting to consume content through social channels rather than traditional news sources. Although people frequent social platforms for a variety of reasons, we understand little about the consequences of encountering new information in these contexts, particularly with respect to how content is scrutinized. This research tests how perceiving the presence of others (as on social media platforms) affects the way that individuals evaluate information, in particular the extent to which they verify ambiguous claims. Eight experiments using incentivized real-effort tasks found that people are less likely to fact-check statements when they feel that they are evaluating them in the presence of others compared with when they are evaluating them alone. Inducing vigilance immediately before evaluation increased fact-checking in social settings.
Project description:A longstanding problem in the social, biological, and computational sciences is to determine how groups of distributed individuals can form intelligent collective judgments. Since Galton's discovery of the "wisdom of crowds" [Galton F (1907) Nature 75:450-451], theories of collective intelligence have suggested that the accuracy of group judgments requires individuals to be either independent, with uncorrelated beliefs, or diverse, with negatively correlated beliefs [Page S (2008) The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies]. Previous experimental studies have supported this view by arguing that social influence undermines the wisdom of crowds. These results showed that individuals' estimates became more similar when subjects observed each other's beliefs, thereby reducing diversity without a corresponding increase in group accuracy [Lorenz J, Rauhut H, Schweitzer F, Helbing D (2011) Proc Natl Acad Sci USA 108:9020-9025]. By contrast, we show general network conditions under which social influence improves the accuracy of group estimates, even as individual beliefs become more similar. We present theoretical predictions and experimental results showing that, in decentralized communication networks, group estimates become reliably more accurate as a result of information exchange. We further show that the dynamics of group accuracy change with network structure. In centralized networks, where the influence of central individuals dominates the collective estimation process, group estimates become more likely to increase in error.
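The decentralized case can be illustrated with a toy DeGroot-style simulation (an illustrative sketch, not the study's experimental design; the ring size, noise level, and 0.5 self-weight are assumptions): on a ring, repeated symmetric averaging preserves the collective mean exactly while pulling individual estimates toward it, so beliefs grow more similar without the group estimate losing accuracy.

```python
import random

def degroot_round(beliefs, neighbors, self_weight=0.5):
    """One round of revision: each individual averages its own estimate
    with the mean of its neighbors' current estimates."""
    new = []
    for i, b in enumerate(beliefs):
        social = sum(beliefs[j] for j in neighbors[i]) / len(neighbors[i])
        new.append(self_weight * b + (1 - self_weight) * social)
    return new

random.seed(1)
truth, n = 100.0, 40
initial = [random.gauss(truth, 25) for _ in range(n)]
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # decentralized ring

beliefs = initial[:]
for _ in range(5):
    beliefs = degroot_round(beliefs, ring)

before_err = sum(abs(b - truth) for b in initial) / n   # mean individual error
after_err = sum(abs(b - truth) for b in beliefs) / n    # smaller after exchange
```

Because the ring is regular and the weights are symmetric, the update matrix is doubly stochastic: the collective mean is conserved while individual dispersion shrinks, so average individual error falls. In a centralized network the update is no longer doubly stochastic, and the central individual's error can drag the group estimate, matching the abstract's contrast.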
Project description:Decades of research on collective decision making have suggested that the aggregated judgment of multiple individuals is more accurate than expert individual judgment. A longstanding problem in this regard has been to determine how individual decisions can be combined to form intelligent group decisions. Our study used a random target-detection task in natural scenes, in which human subjects (18 subjects, 7 female) detected the presence or absence of a random target indicated by a cue word displayed before the stimulus. Neural activity (EEG signals) was recorded concurrently. A separate behavioral experiment was performed by different subjects (20 subjects, 11 female) on the same set of images to categorize the tasks by difficulty level. We demonstrate that a weighted average of individual decision confidences/neural decision variables produces significantly better performance than the frequently used majority-pooling algorithm. Further, classification error rates from individual judgments increased with task difficulty; this error could be significantly reduced by combining individual decisions using group aggregation rules. Using statistical tests, we show that combining all available participants is unnecessary to achieve the minimum classification error rate. We also explore whether the benefits of group aggregation depend on the correlation between the individual judgments of the group; our results suggest that reduced inter-subject correlation can improve collective decision making at a fixed difficulty level.
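The contrast between majority pooling and confidence weighting can be sketched as follows; the votes and confidence values are invented for illustration, not drawn from the study's data.

```python
def majority_vote(decisions):
    """Group decision by simple majority of binary votes (+1 = present, -1 = absent)."""
    return 1 if sum(decisions) > 0 else -1

def confidence_weighted(decisions, confidences):
    """Group decision by a confidence-weighted sum: confident members count more."""
    s = sum(d * c for d, c in zip(decisions, confidences))
    return 1 if s > 0 else -1

# Target actually present (+1): two confident correct members, three unsure wrong ones.
votes = [+1, +1, -1, -1, -1]
conf = [0.9, 0.8, 0.2, 0.1, 0.2]
# majority_vote(votes) -> -1 (wrong); confidence_weighted(votes, conf) -> +1 (right)
```

Majority pooling throws away exactly the graded information (behavioral confidence or a neural decision variable) that lets a confident minority outvote an unsure majority, which is why the weighted average can outperform it.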
Project description:Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds.
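The interplay of mean bias, median bias, and a bias-corrected aggregate can be illustrated with a toy log-normal model of individual estimates. The bias and noise parameters are invented, and the "corrected mean" here is simply a geometric mean with a known log-space bias removed, standing in for the paper's empirically calibrated measures.

```python
import math, random

random.seed(7)
truth = 500
bias, noise = -0.1, 0.6   # assumed log-space bias and spread of individual estimates
estimates = [truth * math.exp(random.gauss(bias, noise)) for _ in range(2000)]

mean_est = sum(estimates) / len(estimates)            # right skew pulls this above truth
median_est = sorted(estimates)[len(estimates) // 2]   # underestimation bias pulls this below
log_mean = sum(math.log(e) for e in estimates) / len(estimates)
corrected = math.exp(log_mean - bias)                 # geometric mean, bias removed
```

Even though no raw aggregate lands on the truth, knowing the direction and size of the individual bias (measurable once, on calibration stimuli) is enough to recover an accurate collective estimate, which is the inexpensive strategy the abstract proposes.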