Project description:Latent print examiners use their expertise to determine whether the information present in a comparison of two fingerprints (or palmprints) is sufficient to conclude that the prints were from the same source (individualization). When fingerprint evidence is presented in court, it is the examiner's determination, not an objective metric, that is presented. This study was designed to ascertain the factors that explain examiners' determinations of sufficiency for individualization. Volunteer latent print examiners (n = 170) were each assigned 22 pairs of latent and exemplar prints for examination, and annotated features, correspondence of features, and clarity. The 320 image pairs were selected specifically to control clarity and quantity of features. The predominant factor differentiating annotations associated with individualization and inconclusive determinations is the count of corresponding minutiae; other factors such as clarity provided minimal additional discriminative value. Examiners' counts of corresponding minutiae were strongly associated with their own determinations; however, due to substantial variation of both annotations and determinations among examiners, one examiner's annotation and determination on a given comparison is a relatively weak predictor of whether another examiner would individualize. The extensive variability in annotations also means that we must treat any individual examiner's minutia counts as interpretations of the (unknowable) information content of the prints: saying "the prints had N corresponding minutiae marked" is not the same as "the prints had N corresponding minutiae." More consistency in annotations, which could be achieved through standardization and training, should lead to process improvements and provide greater transparency in casework.
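The predominant-factor finding lends itself to a simple illustration. Below is a minimal sketch in Python, with entirely hypothetical data and variable names (the study's actual statistical models are not specified here), of how a logistic regression could relate an examiner's corresponding-minutiae count to the probability of an individualization determination:

```python
# Minimal sketch: logistic regression of individualization determinations on
# corresponding-minutiae counts. All data and names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical annotations: minutiae counted by an examiner on a comparison,
# and whether that examiner individualized (1) or was inconclusive (0).
minutiae_counts = np.array([[4], [6], [7], [9], [11], [12], [14], [16]])
individualized = np.array([0, 0, 0, 1, 0, 1, 1, 1])

model = LogisticRegression().fit(minutiae_counts, individualized)
# Estimated probability of individualization at a count of 10 minutiae.
print(model.predict_proba([[10]])[:, 1])
```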
Project description:The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. We tested latent print examiners on the extent to which they reached consistent decisions. This study assessed intra-examiner repeatability by retesting 72 examiners on comparisons of latent and exemplar fingerprints, after an interval of approximately seven months; each examiner was reassigned 25 image pairs for comparison, out of a total pool of 744 image pairs. We compare these repeatability results with reproducibility (inter-examiner) results derived from our previous study. Examiners repeated 89.1% of their individualization decisions, and 90.1% of their exclusion decisions; most of the changed decisions resulted in inconclusive decisions. Repeatability of comparison decisions (individualization, exclusion, inconclusive) was 90.0% for mated pairs and 85.9% for nonmated pairs. Repeatability and reproducibility were notably lower for comparisons assessed by the examiners as "difficult" than for "easy" or "moderate" comparisons, indicating that examiners' assessments of difficulty may be useful for quality assurance. No false positive errors were repeated (n = 4); 30% of false negative errors were repeated. One percent of latent value decisions were completely reversed (no value even for exclusion vs. of value for individualization). Most of the inter- and intra-examiner variability concerned whether the examiners considered the information available to be sufficient to reach a conclusion; this variability was concentrated on specific image pairs such that repeatability and reproducibility were very high on some comparisons and very low on others. Much of the variability appears to be due to making categorical decisions in borderline cases.
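The repeatability metric itself can be illustrated with a minimal Python sketch using hypothetical decision data; the real study used the categorical decisions individualization, exclusion, and inconclusive, and agreement between a first test and a retest is computed roughly as follows:

```python
# Minimal sketch of a repeatability computation: the fraction of retest
# decisions matching the original decision, overall and per image pair.
# The decision log below is hypothetical.
import pandas as pd

df = pd.DataFrame({
    "examiner": [1, 1, 2, 2, 3, 3],
    "pair":     ["A", "B", "A", "B", "A", "B"],
    "first":    ["indiv", "excl",    "indiv", "inconcl", "indiv",   "excl"],
    "retest":   ["indiv", "inconcl", "indiv", "inconcl", "inconcl", "excl"],
})

df["repeated"] = df["first"] == df["retest"]
print("overall repeatability:", df["repeated"].mean())
# Per-pair repeatability mirrors the reported concentration of variability
# on specific image pairs.
print(df.groupby("pair")["repeated"].mean())
```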
Project description:The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The National Research Council of the National Academies and the legal and forensic sciences communities have called for research to measure the accuracy and reliability of latent print examiners' decisions, a challenging and complex problem in need of systematic analysis. Our research is focused on the development of empirical approaches to studying this problem. Here, we report on the first large-scale study of the accuracy and reliability of latent print examiners' decisions, in which 169 latent print examiners each compared approximately 100 pairs of latent and exemplar fingerprints from a pool of 744 pairs. The fingerprints were selected to include a range of attributes and quality encountered in forensic casework, and to be comparable to searches of an automated fingerprint identification system containing more than 58 million subjects. This study evaluated examiners on key decision points in the fingerprint examination process; procedures used operationally include additional safeguards designed to minimize errors. Five examiners made false positive errors for an overall false positive rate of 0.1%. Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%. Independent examination of the same comparisons by different participants (analogous to blind verification) was found to detect all false positive errors and the majority of false negative errors in this study. Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion.
Project description:Bayesian forecasting for dose individualization of prophylactic factor VIII replacement therapy using pharmacokinetic samples is challenged by large interindividual variability in bleeding risk. A pharmacokinetic-repeated time-to-event model-based forecasting approach was developed to contrast the ability to predict the future occurrence of bleeds based on individual (i) pharmacokinetic, (ii) bleeding, and (iii) pharmacokinetic, bleeding, and covariate information, using observed data from the Long-Term Efficacy Open-Label Program in Severe Hemophilia A Disease (LEOPOLD) clinical trials (172 severe hemophilia A patients on prophylactic treatment). The predictive performance, assessed by the area under the receiver operating characteristic (ROC) curve, was 0.67 (95% confidence interval (CI), 0.65-0.69), 0.78 (95% CI, 0.76-0.80), and 0.79 (95% CI, 0.77-0.81) for patients ≥ 12 years when using pharmacokinetic, bleed, and all data, respectively, suggesting that individual bleed information adds value to the optimization of prophylactic dosing regimens in severe hemophilia A. Further steps to optimize the proposed tool for factor VIII dose adaptation in the clinic are required.
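For readers unfamiliar with the reported metric, the following minimal sketch shows how an area under the ROC curve with a bootstrap 95% confidence interval can be computed. The labels and risk scores are synthetic placeholders, not LEOPOLD data, and scikit-learn's roc_auc_score stands in for whatever estimator the study used:

```python
# Minimal sketch: ROC AUC with a bootstrap confidence interval.
# y_true and y_score are synthetic stand-ins for observed bleeds and
# model-predicted bleed risk.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)           # bleed occurred (1) or not (0)
y_score = y_true * 0.3 + rng.random(200) * 0.7  # predicted bleeding risk

aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
    if len(np.unique(y_true[idx])) < 2:
        continue  # AUC is undefined if a resample contains a single class
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

print("AUC:", roc_auc_score(y_true, y_score))
print("95% CI:", np.percentile(aucs, [2.5, 97.5]))
```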
Project description:There is an immense literature on detection of latent fingerprints (LFPs) with fluorescent nanomaterials because fluorescence is one of the most sensitive detection methods. Although many fluorescent probes have been developed for latent fingerprint detection, many challenges remain, including the low selectivity, complicated processing, high background, and toxicity of nanoparticles used to visualize LFPs. In this study, we demonstrate biocompatible, efficient, and low-background LFP detection with poly(vinylpyrrolidone) (PVP)-coated fluorescent nanodiamonds (FNDs). PVP-coated FNDs (FND@PVP) are biocompatible at the cellular level: they neither inhibit cellular proliferation nor induce cell death via apoptosis or other cell-killing pathways, and they do not elicit an immune response in cells. The PVP coating enhances the physical adhesion of FND to diverse substrates and, in particular, results in efficient binding of FND@PVP to fingerprint ridges due to the intrinsic amphiphilicity of PVP. Clear, well-defined ridge structures with first-, second-, and third-level LFP details are revealed within minutes by FND@PVP. The combination of this binding specificity and the remarkable optical properties of FND@PVP permits the detection of LFPs with high contrast, efficiency, selectivity, and sensitivity, and with reduced background interference. Our results demonstrate that background-free imaging via multicolor emission and dual-modal imaging of FND@PVP nanoparticles have great potential for high-resolution imaging of LFPs.
Project description:As the weakest links in information security defense are the individuals in an organization, it is important to understand their information security behaviors. In the current study, we tested whether neural variability patterns could predict an individual's intention to engage in information security violations. Because cognitive neuroscience methods can provide a new perspective on psychological processes without common methodological biases or social desirability effects, we combined an adapted version of the information security paradigm (ISP) with functional magnetic resonance imaging (fMRI). While completing the adapted ISP task, participants underwent an fMRI scan, and we adopted a machine learning method to build a neural variability predictive model. Consistent with previous studies, we found that people were more likely to take action under neutral conditions than in minor or major violation contexts. Moreover, the neural variability predictive model, comprising nodes within the task control, default mode, visual, salience, and attention networks, could predict information security violation intentions. These results illustrate the predictive value of neural variability for information security violations and provide a new perspective for combining the ISP with fMRI to explore a neural predictive model of information security violation intention.
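A minimal sketch of the general approach follows, with synthetic data throughout; the study's parcellation, preprocessing, and exact learner are not reproduced here. The idea is to compute each node's BOLD signal variability and predict violation intention with a cross-validated regression model:

```python
# Minimal sketch of a neural-variability predictive model: per-node BOLD
# variability as features, cross-validated prediction of intention scores.
# All data are synthetic placeholders for real fMRI features.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_subjects, n_nodes, n_timepoints = 60, 264, 200
bold = rng.standard_normal((n_subjects, n_nodes, n_timepoints))

# Neural variability: standard deviation of each node's time series.
variability = bold.std(axis=2)               # shape (subjects, nodes)
intention = rng.standard_normal(n_subjects)  # behavioral intention scores

predicted = cross_val_predict(SVR(kernel="linear"), variability, intention, cv=10)
print("prediction-outcome correlation:", np.corrcoef(predicted, intention)[0, 1])
```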
Project description:Information transfer, measured by transfer entropy, is a key component of distributed computation. It is therefore important to understand the pattern of information transfer in order to unravel the distributed computational algorithms of a system. Since distributed computation in many natural systems is thought to rely on rhythmic processes, a frequency-resolved measure of information transfer is highly desirable. Here, we present a novel algorithm, and its efficient implementation, to identify separately the frequencies sending and receiving information in a network. Our approach relies on the invertible maximum overlap discrete wavelet transform (MODWT) for the creation of surrogate data in the computation of transfer entropy, and entirely avoids filtering of the original signals. The approach thereby avoids well-known problems due to phase shifts or the ineffectiveness of filtering in the information-theoretic setting. We also show that measuring frequency-resolved information transfer is a partial information decomposition problem that cannot be fully resolved to date, and we discuss the implications of this issue. Last, we evaluate the performance of our algorithm on simulated data and apply it to human magnetoencephalography (MEG) recordings and to local field potential recordings in the ferret. In human MEG we demonstrate top-down information flow in temporal cortex from very high frequencies (above 100 Hz) to both similarly high frequencies and to frequencies around 20 Hz, i.e., a complex spectral configuration of cortical information transmission that has not been described before. In the ferret we show that the prefrontal cortex sends information at low frequencies (4-8 Hz) to early visual cortex (V1), while V1 receives the information at high frequencies (>125 Hz).
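A minimal sketch of the surrogate idea, assuming PyWavelets: the undecimated stationary wavelet transform (pywt.swt) stands in for the invertible MODWT, and a crude binned plug-in estimator stands in for the study's transfer entropy estimator. Scrambling the detail coefficients at one scale and inverting yields a surrogate lacking that scale's information, without any filtering of the original signal; the resulting drop in transfer entropy attributes the transfer to that sending band:

```python
# Sketch of scale-specific surrogates for frequency-resolved transfer entropy.
# pywt.swt/iswt approximate the invertible MODWT; the TE estimator is a crude
# binned placeholder, not the estimator used in the study.
import numpy as np
import pywt

def scale_surrogate(x, target_level, levels=4, wavelet="db4", rng=None):
    """Permute detail coefficients at one scale, then invert the transform."""
    rng = rng or np.random.default_rng()
    coeffs = pywt.swt(x, wavelet, level=levels)   # [(cA_j, cD_j), ...]
    coeffs = [(ca, rng.permutation(cd) if j == target_level else cd)
              for j, (ca, cd) in enumerate(coeffs)]
    return pywt.iswt(coeffs, wavelet)

def binned_te(src, tgt, bins=4):
    """Plug-in estimate of TE(src -> tgt) at lag 1 using quantile binning."""
    q = lambda v: np.digitize(v, np.quantile(v, np.linspace(0, 1, bins + 1))[1:-1])
    s, t = q(src), q(tgt)
    y, y1, x1 = t[1:], t[:-1], s[:-1]
    def H(*vs):  # joint entropy of discretized variables, in bits
        p = np.bincount(np.ravel_multi_index(vs, [bins] * len(vs)),
                        minlength=bins ** len(vs)) / len(vs[0])
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    # TE = H(Y_t,Y_1) - H(Y_1) - H(Y_t,Y_1,X_1) + H(Y_1,X_1)
    return H(y, y1) - H(y1) - H(y, y1, x1) + H(y1, x1)

rng = np.random.default_rng(2)
source = rng.standard_normal(1024)                       # length divisible by 2**levels
target = np.roll(source, 1) + 0.5 * rng.standard_normal(1024)

te_full = binned_te(source, target)
te_surr = binned_te(scale_surrogate(source, target_level=2, rng=rng), target)
print("TE drop from scrambling scale 2:", te_full - te_surr)
```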
Project description:Glottis segmentation is a crucial step in quantifying endoscopic footage from laryngeal high-speed videoendoscopy. Recent advances in deep neural networks for glottis segmentation allow for a fully automatic workflow. However, the inner workings of these deep segmentation networks remain largely unknown, and understanding them is crucial for acceptance in clinical practice. Here, we show through systematic ablations that a single latent channel as a bottleneck layer is sufficient for glottal area segmentation. We further demonstrate that the latent space is an abstraction of the glottal area segmentation relying on three spatially defined pixel subtypes, allowing for a transparent interpretation. We further provide evidence that the latent space is highly correlated with the glottal area waveform, can be encoded with four bits, and can be decoded using lean decoders while maintaining high reconstruction accuracy. Our findings suggest that glottis segmentation is a task that can be highly optimized to yield very efficient and explainable deep neural networks, important for application in the clinic. In the future, we believe that online deep learning-assisted monitoring will be a game-changer in laryngeal examinations.
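The two headline design points, a single-channel bottleneck and a 4-bit latent code, can be illustrated with a toy PyTorch sketch; the architecture below is a stand-in, not the paper's network, and the hard rounding would need a straight-through estimator during training:

```python
# Toy encoder-decoder with a single latent channel as the bottleneck and a
# 4-bit (16-level) quantization of the latent map. Illustrative only.
import torch
import torch.nn as nn

class OneChannelBottleneckNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, stride=2, padding=1),  # single latent channel
        )
        self.decoder = nn.Sequential(  # a lean decoder
            nn.ConvTranspose2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = torch.sigmoid(self.encoder(x))
        # Quantize the latent map to 4 bits (16 levels); training would need a
        # straight-through estimator since round() has zero gradient.
        z = torch.round(z * 15) / 15
        return torch.sigmoid(self.decoder(z))  # predicted glottal area mask

frames = torch.rand(8, 1, 64, 64)  # mock endoscopy frames
masks = OneChannelBottleneckNet()(frames)
print(masks.shape)  # (8, 1, 64, 64)
```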
Project description:Validity coefficients for multicomponent measuring instruments are known to be affected by measurement error that attenuates them, affects associated standard errors, and influences results of statistical tests with respect to population parameter values. To account for measurement error, a latent variable modeling approach is discussed that allows point and interval estimation of the relationship of an underlying latent factor to a criterion variable in a setting that is more general than the commonly considered homogeneous psychometric test case. The method is particularly helpful in validity studies for scales with a second-order factorial structure, by allowing evaluation of the relationship between the second-order factor and a criterion variable. The procedure is similarly useful in studies of discriminant, convergent, concurrent, and predictive validity of measuring instruments with complex latent structure, and is readily applicable when measuring interrelated traits that share a common variance source. The outlined approach is illustrated using data from an authoritarianism study.
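The classical special case underlying such corrections is Spearman's correction for attenuation; the latent variable approach described above generalizes it to scales with complex and second-order factorial structure, but the basic relation is:

```latex
% Spearman's correction for attenuation (a standard special case, not the
% more general latent variable model discussed above): the observed validity
% coefficient \rho_{XC} understates the latent relation \rho_{TC} by the
% square root of the scale's reliability \rho_{XX'}.
\rho_{XC} = \rho_{TC}\,\sqrt{\rho_{XX'}}
\qquad\Longrightarrow\qquad
\hat{\rho}_{TC} = \frac{\rho_{XC}}{\sqrt{\rho_{XX'}}}
```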
Project description:Latent fingerprint examiners sometimes come to different conclusions when comparing fingerprints, and eye-gaze behavior may help explain these outcomes. Missed identifications (missed IDs) are inconclusive, exclusion, or No Value determinations reached when the consensus of other examiners is an identification. To determine the relation between examiner behavior and missed IDs, we collected eye-gaze data from 121 latent print examiners as they completed a total of 1444 difficult (latent-exemplar) comparisons. We extracted metrics from the gaze data that serve as proxies for underlying perceptual and cognitive capacities. We used these metrics to characterize potential mechanisms of missed IDs: Cursory Comparison and Mislocalization. We find that missed IDs are associated with shorter comparison times, fewer regions visited, and fewer attempted correspondences between the compared images. Latent print comparisons resulting in erroneous exclusions (a subset of missed IDs) are also more likely to have fixations in different regions and less accurate correspondence attempts than those comparisons resulting in identifications. We also use our derived metrics to describe one atypical examiner who made six erroneous identifications, four of which were on comparisons intended to be straightforward exclusions. The present work helps identify the degree to which missed IDs can be explained using eye-gaze behavior, and the extent to which missed IDs depend on cognitive and decision-making factors outside the domain of eye-tracking methodologies.
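The gaze metrics named above can be made concrete with a minimal Python sketch; the fixation log, field names, and the correspondence-attempt proxy are all illustrative, not the study's instrumentation:

```python
# Minimal sketch of gaze metrics: comparison time, distinct regions visited
# per image, and a crude proxy for attempted correspondences, computed from a
# hypothetical fixation log.
import pandas as pd

fixations = pd.DataFrame({
    "t_ms":   [0, 300, 650, 900, 1400, 1800],
    "image":  ["latent", "latent", "exemplar", "latent", "exemplar", "exemplar"],
    "region": ["A", "B", "B", "A", "A", "C"],
})

comparison_time = fixations["t_ms"].iloc[-1] - fixations["t_ms"].iloc[0]
regions_visited = fixations.groupby("image")["region"].nunique()
# Proxy for an attempted correspondence: a gaze shift from one image to the
# other that lands in the same labeled region.
crossed = fixations["image"] != fixations["image"].shift()
same_region = fixations["region"] == fixations["region"].shift()
attempted_correspondences = int((crossed & same_region).sum())
print(comparison_time, dict(regions_visited), attempted_correspondences)
```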