Project description: Accurately mapping brain structures in three dimensions is critical for an in-depth understanding of brain function. Using a brain atlas as a hub, mapping newly acquired datasets into a standard brain space enables efficient use of various datasets. However, recently developed high-resolution whole-brain microscopy techniques reveal heterogeneous, nonuniform brain structure at the cellular level, which makes it difficult to apply a single standard for robust registration of various large-volume datasets. In this study, we propose a robust Brain Spatial Mapping Interface (BrainsMapi) to address the registration of large-volume datasets by introducing the extraction of anatomically invariant regional features and a large-volume data transformation method. Through validation on model data and biological images, BrainsMapi achieves accurate registration of intramodal, individual, and multimodal datasets and can complete the registration of large-volume datasets (approximately 20 TB) within one day. In addition, it can register and integrate unregistered vectorized datasets into a common brain space. BrainsMapi will facilitate the comparison, reuse, and integration of a variety of brain datasets.
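As a minimal sketch (not the published BrainsMapi implementation), transforming a multi-terabyte volume becomes tractable when each output chunk is resampled independently through a precomputed deformation field, so only a small padded sub-volume of the moving image is ever held in memory. Here `moving_reader` stands in for a chunked I/O backend (e.g., HDF5 or tiled TIFF) and is an assumption:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def transform_block(moving_reader, field, z0, y0, x0, shape, pad=8):
    """Resample one output block through a dense deformation field.

    field: (3, Z, Y, X) displacement field defined on the output grid.
    moving_reader(lo, hi): hypothetical callable returning the moving
    sub-volume between corner voxels lo and hi.
    """
    zs, ys, xs = shape
    # Coordinates of this output block in the full output grid.
    grid = np.mgrid[z0:z0 + zs, y0:y0 + ys, x0:x0 + xs].astype(np.float32)
    # Add the displacement to get sampling positions in the moving volume.
    coords = grid + field[:, z0:z0 + zs, y0:y0 + ys, x0:x0 + xs]
    # Read only the padded bounding box of the moving volume that is needed.
    lo = np.maximum(coords.reshape(3, -1).min(axis=1).astype(int) - pad, 0)
    hi = coords.reshape(3, -1).max(axis=1).astype(int) + pad
    sub = moving_reader(lo, hi)              # small in-RAM sub-volume
    local = coords - lo[:, None, None, None]  # shift into sub-volume frame
    return map_coordinates(sub, local, order=1, mode="nearest")
```

Because blocks are independent, they can be distributed across processes or machines, which is what makes a ~20 TB transformation feasible in bounded memory.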
Project description: Establishing correspondences across brains for the purposes of comparison and group analysis is almost universally done by registering images to one another, either directly or via a template. However, there are many registration algorithms to choose from. A recent evaluation of fully automated nonlinear deformation methods applied to brain image registration was restricted to volume-based methods. The present study is the first to directly compare some of the most accurate of these volume registration methods with surface registration methods, as well as the first to compare registrations of whole-head and brain-only (de-skulled) images. We used permutation tests to compare overlap or Hausdorff distance performance for more than 16,000 registrations between 80 manually labeled brain images, covering every combination of volume-based and surface-based labels, registration, and evaluation. Our primary findings are as follows: (1) de-skulling aids volume registration methods; (2) custom-made optimal average templates improve registration over direct pairwise registration; and (3) resampling volume labels onto surfaces, or converting surface labels to volumes, introduces distortions that preclude a fair comparison between the highest-ranking volume and surface registration methods using present resampling methods. Based on these results, we recommend constructing a custom template from a limited sample drawn from the same or a similar representative population, using the same algorithm that is used for registering brains to the template.
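For illustration, here is a minimal sketch of the two evaluation ingredients named above, label overlap (Dice) and a paired permutation test over per-pair scores; this is our own simplification, not the study's evaluation code:

```python
import numpy as np

def dice(labels_a, labels_b, label):
    """Dice overlap of one labeled region between two label volumes."""
    a = labels_a == label
    b = labels_b == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan

def paired_permutation_test(scores_x, scores_y, n_perm=10_000, seed=None):
    """Two-sided sign-flip permutation test on paired per-pair scores."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_x) - np.asarray(scores_y)
    observed = diffs.mean()
    # Under the null, each paired difference is equally likely to flip sign.
    flips = rng.choice([-1.0, 1.0], size=(n_perm, diffs.size))
    null = (flips * diffs).mean(axis=1)
    return float(np.mean(np.abs(null) >= abs(observed)))  # p-value
```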
Project description: BACKGROUND AND OBJECTIVES: The construction of whole-body magnetic resonance (MR) imaging atlases enables statistical analysis with applications in anomaly detection and in longitudinal and correlation studies. Atlas-based methods require a common coordinate system to which all subjects are mapped through image registration. Optimisation of the reference space is an important aspect that affects the subsequent analysis of the registered data, and a reference space that is neutral with respect to local tissue volume is valuable in correlation studies. The purpose of this work is to generate a reference space for whole-body imaging that has zero voxel-wise average volume change when mapped to a cohort. METHODS: This work proposes an approach to register multiple whole-body images to a common template using volume changes to generate a synthetic reference space: starting with an initial reference, the template is refined by warping it with a deformation that brings the voxel-wise average volume change associated with the mappings of all the images in the cohort to zero. RESULTS: Experiments on fat/water-separated whole-body MR images show that the method effectively generates a reference space neutral with respect to volume changes, without reducing the quality of the registration or introducing artefacts in the anatomy, while providing better alignment than an implicit-reference groupwise approach. CONCLUSIONS: The proposed method quickly generates a reference space that is neutral with respect to local volume changes, retains the registration quality of a sharp template, and can be used for statistical analysis of voxel-wise correlations in large whole-body imaging datasets.
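A sketch under stated assumptions (not the authors' code): local volume change of a mapping is measured by the Jacobian determinant of its displacement field, and a volume-neutral reference is one whose voxel-wise mean log-Jacobian across the cohort is zero, which is the quantity the refinement step drives to zero:

```python
import numpy as np

def log_jacobian_determinant(disp):
    """Voxel-wise log |J| of a mapping x -> x + u(x).

    disp: (3, Z, Y, X) displacement field u in voxel units.
    """
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        # d(x_i + u_i)/dx_j = delta_ij + du_i/dx_j, via finite differences.
        grads = np.gradient(disp[i], axis=(0, 1, 2))
        for j in range(3):
            J[..., i, j] = grads[j] + (1.0 if i == j else 0.0)
    return np.log(np.linalg.det(J))

def mean_volume_change(disps):
    """Voxel-wise mean log-Jacobian over all subjects; zero = neutral."""
    return np.mean([log_jacobian_determinant(d) for d in disps], axis=0)
```

In this formulation, positive values mark voxels where the template expands on average when mapped to the cohort, negative values where it contracts; the correction warp removes that bias.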
Project description: The extreme complexity of mammalian brains requires a comprehensive deconstruction of neuroanatomical structures, and scientists normally use a brain stereotactic atlas to determine the locations of neurons and neuronal circuits. However, different brain images are typically not naturally aligned, even when they are imaged with the same setup, let alone under the differing resolutions and dataset sizes used in mesoscopic imaging. As a result, it is difficult to achieve high-throughput automatic registration without manual intervention. Here, we propose a deep learning-based registration method called DeepMapi to predict a deformation field used to register mesoscopic optical images to an atlas. We use a self-feedback strategy to address the problem of imbalanced training sets (caused by sampling at a fixed step size in brains with nonuniform structures and deformations) and a dual-hierarchical network to capture both large and small deformations. By comparing DeepMapi with other registration methods on a set of ground-truth images, including both optical and MRI images, we demonstrate its superior performance. DeepMapi achieves fully automatic registration of mesoscopic micro-optical images, and even macroscopic MRI datasets, in minutes, with an accuracy comparable to that of manual annotations by anatomists.
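A minimal PyTorch sketch of the general idea, a network that predicts a dense deformation field which is then applied with a differentiable warp; DeepMapi's actual dual-hierarchical architecture and self-feedback sampling are not reproduced here, and the z,y,x channel convention is our assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFlowNet(nn.Module):
    """Toy stand-in: predicts a 3-channel displacement field from a pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, 3, padding=1),   # displacement in voxels
        )

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(moving, flow):
    """Warp `moving` (N,1,D,H,W) by `flow` (N,3,D,H,W, z/y/x voxel shifts)."""
    n, _, d, h, w = moving.shape
    zz, yy, xx = torch.meshgrid(torch.arange(d), torch.arange(h),
                                torch.arange(w), indexing="ij")
    base = torch.stack([xx, yy, zz]).to(flow)   # grid_sample expects x,y,z
    pos = base + flow[:, [2, 1, 0]]             # reorder flow to x,y,z
    scale = torch.tensor([w - 1, h - 1, d - 1], device=flow.device,
                         dtype=flow.dtype).view(1, 3, 1, 1, 1)
    grid = (2 * pos / scale - 1).permute(0, 2, 3, 4, 1)  # normalize [-1,1]
    return F.grid_sample(moving, grid, align_corners=True)
```

Because the warp is differentiable, such a network can be trained end-to-end against a similarity loss between the warped moving image and the atlas.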
Project description: Imprecise registration between positron emission tomography (PET) and anatomical magnetic resonance (MR) images is a critical source of error in MR imaging-guided partial volume correction (MR-PVC). Here, we propose a novel framework for image registration and partial volume correction, which we term PVC-optimized registration (PoR), to address imprecise registration. The PoR framework iterates between PVC and registration of the uncorrected PET to smoothed PV-corrected images to obtain a precise registration. We applied PoR to [11C]PiB PET data from 92 participants in the Alzheimer's Disease Neuroimaging Initiative database and compared the registration results, the PV-corrected standardized uptake value (SUV) and its ratio to the cerebellum (SUVR), and the intra-region coefficient of variation (CoV) between PoR and conventional registration. Significant differences in registration of as much as 2.74 mm and 3.02° were observed between the two methods (effect size < −0.8 or > 0.8), which resulted in considerable SUVR differences throughout the brain, reaching a maximum difference of 62.3% in the sensorimotor cortex. Intra-region CoV was significantly reduced throughout the brain using PoR. These results suggest that PoR reduces the error arising from imprecise registration in PVC and is a useful method for accurately quantifying amyloid burden with PET.
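The alternation described above can be sketched as a simple fixed-point loop. This is a schematic only: `register`, `apply_pvc`, and `smooth` stand in for a rigid PET-MR registration routine, a PVC method, and a Gaussian filter matched to the PET point-spread function, and the convergence test on a rigid parameter vector is our assumption:

```python
import numpy as np

def por(pet, mr_segments, register, apply_pvc, smooth, n_iter=10, tol=1e-3):
    """Alternate PVC and registration until the rigid parameters settle.

    register(moving, fixed) is assumed to return a rigid-transform
    parameter vector (3 translations + 3 rotations) as a numpy array.
    """
    params = register(moving=pet, fixed=mr_segments)   # initial alignment
    corrected = apply_pvc(pet, mr_segments, params)
    for _ in range(n_iter):
        # Re-register the raw PET to the smoothed PV-corrected image, which
        # lives in MR space but has PET-like resolution.
        target = smooth(corrected)
        new_params = register(moving=pet, fixed=target)
        corrected = apply_pvc(pet, mr_segments, new_params)
        if np.max(np.abs(new_params - params)) < tol:  # converged
            break
        params = new_params
    return params, corrected
```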
Project description: The Automatic Non-rigid Histological Image Registration (ANHIR) challenge was organized to compare the performance of image registration algorithms on several kinds of microscopy histology images in a fair and independent manner. We assembled 8 datasets containing 355 images with 18 different stains, resulting in 481 image pairs to be registered. Registration accuracy was evaluated using manually placed landmarks. In total, 256 teams registered for the challenge, 10 submitted results, and 6 participated in the workshop. Here, we present the results of 7 well-performing methods from the challenge together with 6 well-known existing methods. The best methods used a coarse but robust initial alignment followed by non-rigid registration, worked at multiple resolutions, and were carefully tuned for the data at hand. They outperformed off-the-shelf methods, mostly by being more robust. The best methods successfully registered over 98% of all landmarks, and their mean target registration error (TRE) was 0.44% of the image diagonal. The challenge remains open to submissions, and all images are available for download.
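Expressing the TRE as a fraction of the image diagonal makes the metric comparable across images of different sizes; 0.44% corresponds to a relative TRE of about 0.0044. A minimal sketch of such a metric (our own illustration, not the challenge's evaluation code):

```python
import numpy as np

def relative_tre(warped_landmarks, target_landmarks, image_shape):
    """Mean Euclidean landmark error as a fraction of the image diagonal."""
    errors = np.linalg.norm(np.asarray(warped_landmarks)
                            - np.asarray(target_landmarks), axis=1)
    diagonal = np.linalg.norm(image_shape)   # length of the image diagonal
    return errors.mean() / diagonal
```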
Project description: Optical imaging is a common technique in ocean research. Diving robots, towed cameras, drop-cameras, and TV-guided sampling gear all produce image data of the underwater environment. Technological advances such as 4K cameras, autonomous robots, high-capacity batteries, and LED lighting now allow systematic optical monitoring at large spatial scales and over shorter times, but with increased data volume and velocity. Volume and velocity are further increased by growing fleets and emerging swarms of autonomous vehicles creating big data sets in parallel. This generates a need for automated data processing to harvest the maximum information. Systematic data analysis benefits from calibrated, geo-referenced data with a clear metadata description, particularly for machine vision and machine learning. Hence, the expensive data acquisition must be documented, and data should be curated as soon as possible, backed up, and made publicly available. Here, we present a workflow towards sustainable marine image analysis. We describe guidelines for data acquisition, curation, and management, and apply them to the use case of a multi-terabyte deep-sea data set acquired by an autonomous underwater vehicle.
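As a hypothetical illustration (the field names are ours, not a published schema), the kind of calibrated, geo-referenced metadata record the workflow argues each image should carry from acquisition onward might look like this:

```python
from dataclasses import dataclass

@dataclass
class ImageRecord:
    image_id: str        # stable identifier, e.g., SHA-256 of the file
    platform: str        # "AUV", "ROV", "towed camera", ...
    utc_time: str        # ISO 8601 acquisition timestamp
    latitude: float      # WGS84 degrees
    longitude: float     # WGS84 degrees
    depth_m: float       # water depth at acquisition
    altitude_m: float    # camera altitude above the seafloor
    area_m2: float       # calibrated image footprint
    license: str         # e.g., "CC-BY-4.0" for public reuse
```

Recording these fields at acquisition time, rather than reconstructing them later, is what makes downstream machine-vision and machine-learning analysis reproducible.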
Project description: Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in the life sciences. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for large amounts of data due to their high complexity. In this contribution we present a fast, parallelized segmentation method that is especially suited to extracting stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image, based on gradient and normal directions in the proximity of detected seed points, such that it can be handled by straightforward global thresholding such as Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared with other state-of-the-art algorithms. We achieve up to a ten-fold decrease in processing time, allowing us to process large data sets while still providing reasonable segmentation results.
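The seed-driven gradient/normal transform itself is specific to the paper; the sketch below shows only the final step it enables, a single global Otsu threshold per (already transformed) image, here using scikit-image, plus trivially parallel processing across frames:

```python
from concurrent.futures import ProcessPoolExecutor
from skimage.filters import threshold_otsu

def segment_transformed(transformed):
    """Binary nuclei mask via a single global Otsu threshold."""
    return transformed > threshold_otsu(transformed)

def segment_stack(frames, workers=8):
    """Threshold many pre-transformed frames in parallel processes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(segment_transformed, frames))
```

Reducing segmentation to a global threshold is what makes the parallelization effective: each frame costs a single pass over the data with no expensive per-object optimization.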
Project description: Three-dimensional (3D) bioimaging, visualization, and data analysis are in strong need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points, and regions of interest in the 3D space of a volumetric image with a single finger operation, such as a computer-mouse stroke, click, or zoom, on the 2D projection plane of the image as visualized on a computer. VF provides efficient methods for the acquisition, visualization, and analysis of 3D images of roundworm, fruit fly, dragonfly, mouse, rat, and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also improves the efficiency of automated 3D reconstruction of neurons and similar biostructures by orders of magnitude over our previous systems. We use VF to generate a projectome of the Drosophila brain from images of 1,107 Drosophila GAL4 lines.
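A simplified sketch of the core mapping from 2D input to 3D output, assuming an axis-aligned maximum intensity projection (the published VF handles arbitrary view angles and uses more robust peak finding than a plain argmax): each 2D stroke point is lifted to 3D by locating the brightest voxel along its viewing ray.

```python
import numpy as np

def stroke_to_3d_curve(volume, stroke_xy):
    """Lift a 2D mouse stroke on a projection into a 3D curve.

    volume: (Z, Y, X) image array; stroke_xy: iterable of (x, y) pixels
    on the axis-aligned projection plane.
    """
    curve = []
    for x, y in stroke_xy:
        ray = volume[:, y, x]                 # ray along the projection axis
        curve.append((x, y, int(np.argmax(ray))))  # brightest voxel depth
    return curve                               # list of (x, y, z) voxels
```

Because the depth is recovered from image content rather than from extra user input, a single 2D gesture suffices to define a full 3D object, which is what makes the interaction fast enough for terabyte-scale annotation.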
Project description: Despite extensive evidence of possible interactions between multisensory signals, it remains unclear at what level of sensory processing these interactions take place. When two identical auditory beeps (inducers) are presented in quick succession accompanied by a single visual flash, observers often report seeing two visual flashes rather than the physical one: the double flash illusion. This compelling illusion has often been considered to reflect direct interactions between neural activations in different primary sensory cortices. Against this simple account, here we show that simply making the two inducer signals featurally distinct (e.g., high- and low-pitch beeps) abolishes the illusory double flash. This result suggests that a critical component underlying the illusion is perceptual grouping of the inducer signals, consistent with the notion that multisensory combination is preceded by a determination of whether the relevant signals originate from a common source.