Project description: Monetary and fiscal authorities reacted swiftly to the COVID-19 pandemic by purchasing assets (or "Wall Street QE") and lending directly to non-financial firms (or "Main Street Lending"). Our paper develops a new framework to compare and contrast these different policies. For the Great Recession, characterized by impaired balance sheets of financial intermediaries, Main Street Lending and Wall Street QE are perfect substitutes and both stimulate aggregate demand. In contrast, for the COVID-19 recession, where non-financial firms faced significant cash flow shortages, Wall Street QE is almost completely ineffective, whereas Main Street Lending can be highly stimulative.
Project description: Previous research has shown that multitasking can have a positive or a negative influence on driving performance. The aim of this study was to determine how the interaction between driving circumstances and the cognitive requirements of secondary tasks affects a driver's ability to control a car. We created a driving simulator paradigm in which participants drove one of two scenarios: one with no traffic in the driver's lane, and one with substantial traffic in both lanes, some of which had to be overtaken. Four different secondary task conditions were combined with these driving scenarios. In both driving scenarios, using a tablet resulted in the worst, most dangerous performance, while passively listening to the radio or answering questions for a radio quiz led to the best driving performance. Interestingly, driving as a single task did not produce better performance than driving in combination with one of the radio tasks, and even tended to be slightly worse. These results suggest that drivers switch to internally focused secondary tasks when nothing else is available in monotonous or repetitive driving environments. This mind wandering potentially interferes more strongly with driving than non-visual secondary tasks do.
Project description: Recent research has explored the possibility of building attitudinal resistance against online misinformation through psychological inoculation. The inoculation metaphor relies on a medical analogy: by pre-emptively exposing people to weakened doses of misinformation, cognitive immunity can be conferred. A recent example is the Bad News game, an online fake news game in which players learn about six common misinformation techniques. We present a replication and extension of research into the effectiveness of Bad News as an anti-misinformation intervention. We address three shortcomings identified in the original study: the lack of a control group, the relatively low number of test items, and the absence of attitudinal certainty measurements. Using a 2 (treatment vs. control) × 2 (pre vs. post) mixed design (N = 196), we measured participants' ability to spot misinformation techniques in 18 fake headlines before and after playing Bad News. We find that playing Bad News significantly improves people's ability to spot misinformation techniques compared to a gamified control group and, crucially, also increases people's level of confidence in their own judgments. Importantly, this confidence boost only occurred for those who updated their reliability assessments in the correct direction. This study offers further evidence for the effectiveness of psychological inoculation against not only specific instances of fake news, but the very strategies used in its production. Implications are discussed for inoculation theory and cognitive science research on fake news.
Project description: Background: The main goal of whole transcriptome analysis is to correctly identify all transcripts expressed within a specific cell or tissue--at a particular stage and condition--to determine their structures and to measure their abundances. RNA-seq data promise to allow identification and quantification of the transcriptome at an unprecedented level of resolution and accuracy, and at low cost. Several computational methods have been proposed to achieve such purposes. However, it is still not clear which promises are already met and which challenges are still open and require further methodological developments. Results: We carried out a simulation study to assess the performance of five widely used tools: CEM, Cufflinks, iReckon, RSEM, and SLIDE. All of them were used with default parameters. In particular, we considered the effect of three different scenarios: the availability of complete annotation, incomplete annotation, and no annotation at all. Moreover, comparisons were carried out using the methods in three different modes of action. In the first mode, the methods were forced to deal only with those isoforms that are present in the annotation; in the second mode, they were allowed to detect novel isoforms using the annotation as a guide; in the third mode, they operated in a fully data-driven way (although with the support of the alignment on the reference genome). In the latter mode, precision and recall are quite poor. On the contrary, results are better with the support of the annotation, even when it is incomplete. Finally, the abundance estimation error often shows a very skewed distribution. The performance strongly depends on the true abundance of the isoforms. Lowly (and sometimes also moderately) expressed isoforms are poorly detected and estimated.
In particular, lowly expressed isoforms are identified mainly if they are provided in the original annotation as potential isoforms. Conclusions: Both detection and quantification of all isoforms from RNA-seq data are still hard problems, and they are affected by many factors. Overall, performance changes significantly depending on the mode of action and on the type of available annotation. Approaches using complete or partial annotation are able to detect most of the expressed isoforms, even though the number of false positives is often high. Fully data-driven approaches require more attention, at least for complex eukaryotic genomes. Improvements are desirable, especially for isoform quantification and for the detection of low-abundance isoforms.
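As a minimal sketch of how detection performance is typically scored in such simulation studies, precision and recall of a tool's isoform calls can be computed against the simulated ground truth; the transcript identifiers below are invented purely for illustration:

```python
# Hypothetical illustration of the precision/recall scoring used to assess
# isoform detection against a simulated ground truth; IDs are invented.
truth = {"iso1", "iso2", "iso3", "iso4"}   # truly expressed isoforms
detected = {"iso1", "iso2", "iso5"}        # isoforms reported by a tool

tp = len(truth & detected)         # true positives: correctly detected
precision = tp / len(detected)     # fraction of calls that are correct
recall = tp / len(truth)           # fraction of true isoforms recovered

print(precision, recall)           # 2/3 and 1/2 for this toy example
```

A data-driven run with many spurious calls drives precision down, while missed low-abundance isoforms drive recall down, which is the pattern the study reports.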
Project description: Facial expressions carry key information about an individual's emotional state. Research into the perception of facial emotions typically employs static images of a small number of artificially posed expressions taken under tightly controlled experimental conditions. However, such approaches risk missing potentially important facial signals and within-person variability in expressions. The extent to which patterns of emotional variance in such images resemble more natural ambient facial expressions remains unclear. Here we advance a novel protocol for eliciting natural expressions from dynamic faces, using a dimension of emotional valence as a test case. Subjects were video-recorded while delivering either positive or negative news to camera, but were not instructed to deliberately or artificially pose any specific expressions or actions. A PCA-based active appearance model was used to capture the key dimensions of facial variance across frames. Linear discriminant analysis distinguished facial change determined by the emotional valence of the message, and this also generalised across subjects. By sampling along the discriminant dimension and back-projecting into the image space, we extracted a behaviourally interpretable dimension of emotional valence. This dimension highlighted changes commonly represented in traditional face stimuli, such as variation in the internal features of the face, but also key postural changes that would typically be controlled away, such as a dipping versus raising of the head from negative to positive valence. These results highlight the importance of natural patterns of facial behaviour in emotional expressions, and demonstrate the efficacy of data-driven approaches for studying the representation of these cues by the perceptual system. The protocol and model described here could readily be extended to other emotional and non-emotional dimensions of facial variance.
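The PCA-plus-discriminant pipeline described above can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' actual appearance model: the feature dimensionality, frame counts, and class shift are all invented, and a simple two-class Fisher discriminant stands in for the LDA step.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for per-frame appearance vectors (invented data):
# 100 frames x 50 features, two valence classes with a small mean shift.
X = rng.normal(size=(100, 50))
y = np.repeat([0, 1], 50)                 # 0 = negative, 1 = positive valence
X[y == 1] += 0.5

# PCA via SVD on the centred data (the appearance-model step).
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
scores = (X - mu) @ Vt[:10].T             # keep the top 10 components

# Two-class Fisher discriminant in PCA space (the LDA step).
m0, m1 = scores[y == 0].mean(0), scores[y == 1].mean(0)
Sw = np.cov(scores[y == 0].T) + np.cov(scores[y == 1].T)
w = np.linalg.solve(Sw, m1 - m0)          # discriminant direction

# Back-project the discriminant direction into the original feature space,
# yielding an interpretable "valence axis" over the raw features.
valence_axis = w @ Vt[:10]
print(valence_axis.shape)                 # one weight per original feature
```

Sampling points along `w` and back-projecting each one in the same way produces the sequence of reconstructed faces from negative to positive valence described in the abstract.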
Project description: In our experiment, we tested how exposure to a mock televised news segment, with systematically manipulated emotional valence of the voiceover, images, and TV tickers (in the updating format), affects viewers' perceptions. Subjects (N = 603) watched specially prepared professional video material that portrayed the story of a candidate for local mayor. Following exposure to the video, subjects assessed the politician in terms of competence, sociability, and morality. Results showed that positive images improved the assessment of the politician, whereas negative images lowered it. In addition, and unexpectedly, positive tickers led to a more negative assessment, and negative tickers led to a more beneficial one. However, when the voiceover and the information provided on visual add-ons were inconsistent, the additional elements were apparently ignored, especially when they were negative and the narrative was positive. We then discuss the implications of these findings.
Project description: Basol et al. (2020) tested the "Bad News Game" (BNG), an app designed to improve the ability to spot false claims on social media. Participants rated simulated Tweets, then played either the BNG or an unrelated game, then re-rated the Tweets. Playing the BNG lowered rated belief in false Tweets. Here, four teams of undergraduate psychology students each attempted an extended replication of Basol et al., using updated versions of the original Bad News game. The most important extension was that the replications included a larger number of true Tweets than the original study, along with planned analyses of responses to true Tweets. The four replications were loosely coordinated, with each team independently working out how to implement the agreed plan. Despite many departures from the Basol et al. method, all four teams replicated the key finding: playing the BNG reduced belief in false Tweets. But playing the BNG also reduced belief in true Tweets to the same, or almost the same, extent. Exploratory signal detection theory analyses indicated that the BNG increased response bias but did not improve discrimination. This converges with findings reported by Modirrousta-Galian and Higham (2023).
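The signal detection theory decomposition mentioned above separates sensitivity (d′, discrimination between false and true Tweets) from response bias (criterion c). A minimal sketch of the equal-variance computation follows; the pre/post hit and false-alarm rates are invented, chosen purely to illustrate a bias shift without a sensitivity change:

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Equal-variance signal detection: sensitivity d' and criterion c."""
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)            # discrimination (d')
    c = -0.5 * (z(hit_rate) + z(fa_rate))   # response bias (criterion)
    return d, c

# Invented rates for illustration: a "hit" is correctly disbelieving a
# false Tweet; a "false alarm" is disbelieving a true Tweet.
pre = dprime_criterion(0.60, 0.40)
post = dprime_criterion(0.70, 0.50)  # both rates rise together after play
print(pre, post)
```

When hit and false-alarm rates move up together, as in this toy example, d′ stays nearly constant while c becomes more liberal toward "disbelieve", which is the qualitative pattern the replications report.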
Project description: Background: The health care industry has more insider breaches than any other industry. Soon-to-be graduates are the trusted insiders of tomorrow, and their knowledge can be used to compromise organizational security systems. Objective: The objective of this paper was to identify the role that monetary incentives play in the violation of Health Insurance Portability and Accountability Act (HIPAA) regulations and privacy laws by the next generation of employees. The research model was developed using the economics of crime literature and rational choice theory. The primary research question was whether higher perceptions of being apprehended for violating HIPAA regulations were related to higher requirements for monetary incentives. Methods: Five scenarios were developed to determine whether monetary incentives could be used to influence subjects to illegally obtain health care information and to release that information to individuals and media outlets. The subjects were also asked about the probability of getting caught for violating HIPAA laws. Correlation analysis was used to determine whether higher perceptions of being apprehended for violating HIPAA regulations were related to higher requirements for monetary incentives. Results: Many of the subjects believed there was a high probability of being caught. Nevertheless, many of them could be incentivized to violate HIPAA laws. In the nursing scenario, 45.9% (240/523) of the participants indicated that there is a price, ranging from US $1000 to over US $10 million, that is acceptable for violating HIPAA laws. In the doctors' scenario, 35.4% (185/523) of the participants indicated that there is such a price, and in the insurance agent scenario, 45.1% (236/523) did.
When a personal context is involved, the percentages increase substantially. In the scenario where an experimental treatment for the subject's mother is needed, which is not covered by insurance, 78.4% (410/523) of the participants would accept US $100,000 from a media outlet for the medical records of a politician. In the scenario where US $50,000 is needed to obtain medical records about a famous reality star to help a friend in need of emergency medical transportation, 64.6% (338/523) of the participants would accept the money. Conclusions: A key finding of this study is that individuals who perceive a high probability of being caught are less likely to release private information. However, when the personal context involves a friend or family member, such as a mother, they will probably succumb to the incentive regardless of the probability of being caught. The key to reducing noncompliance will be to implement organizational procedures, monitor compliance constantly, and develop educational and training programs that encourage HIPAA compliance.
Project description: Purpose: To present a novel method for meta-analysis of the fractionation sensitivity of tumors, as applied to prostate cancer in the presence of an overall time factor. Methods and materials: A systematic search for radiation dose-fractionation trials in prostate cancer was performed using PubMed and by manual search. Published trials comparing standard fractionated external beam radiation therapy with alternative fractionation were eligible. For each trial the α/β ratio and its 95% confidence interval (CI) were extracted, and the data were synthesized with each study weighted by the inverse variance. An overall time factor was included in the analysis, and its influence on α/β was investigated. Results: Five studies involving 1965 patients were included in the meta-analysis of α/β. The synthesized α/β assuming no effect of overall treatment time was -0.07 Gy (95% CI -0.73 to 0.59), which increased to 0.47 Gy (95% CI -0.55 to 1.50) if a single highly weighted study was excluded. In a separate analysis, 2 studies based on 10,808 patients in total allowed extraction of a synthesized estimate of a time factor of 0.31 Gy/d (95% CI 0.20 to 0.42). Including the time factor increased the α/β estimate to 0.58 Gy (95% CI -0.53 to 1.69) with the heavily weighted study and 1.93 Gy (95% CI -0.27 to 4.14) without it. An analysis of the uncertainty of the α/β estimate showed a loss of information when the hypofractionated arm was underdosed compared with the normo-fractionated arm. Conclusions: The current external beam fractionation studies are consistent with a very low α/β ratio for prostate cancer, although the CIs include α/β ratios up to 4.14 Gy in the presence of a time factor. Details of the dose fractionation in the 2 trial arms have a critical influence on the information that can be extracted from a study. Studies with unfortunate designs will supply little or no information about α/β regardless of the number of subjects enrolled.
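The fixed-effect, inverse-variance synthesis described in the methods can be sketched as follows; the per-study α/β estimates and confidence intervals below are invented for illustration and do not reproduce the paper's data:

```python
# Hedged sketch of inverse-variance meta-analysis: each study is weighted
# by 1/SE^2, with the SE recovered from the reported 95% CI. The numbers
# are invented and do not come from the prostate cancer trials.
studies = [  # (alpha/beta estimate in Gy, 95% CI lower, 95% CI upper)
    (1.2, -0.5, 2.9),
    (-0.3, -1.1, 0.5),
    (2.0, 0.2, 3.8),
]

weights, weighted = [], []
for est, lo, hi in studies:
    se = (hi - lo) / (2 * 1.96)   # standard error recovered from the 95% CI
    w = 1 / se**2                 # inverse-variance weight
    weights.append(w)
    weighted.append(w * est)

pooled = sum(weighted) / sum(weights)        # synthesized estimate
pooled_se = (1 / sum(weights)) ** 0.5        # SE of the pooled estimate
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(round(pooled, 3), tuple(round(x, 3) for x in ci))
```

Note how the study with the narrowest CI dominates the pooled value, which mirrors the paper's observation that a single highly weighted study can shift the synthesized α/β appreciably.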