Project description:Accurately mapping brain structures in three dimensions is critical for an in-depth understanding of brain functions. Using a brain atlas as a hub, mapping acquired datasets into a standard brain space enables efficient use of various datasets. However, the heterogeneous and nonuniform structural characteristics revealed at the cellular level by recently developed high-resolution whole-brain microscopy techniques make it difficult to register various large-volume datasets robustly with a single standard approach. In this study, we propose a robust Brain Spatial Mapping Interface (BrainsMapi) that addresses the registration of large-volume datasets by introducing the extraction of anatomically invariant regional features and a large-volume data transformation method. Validation on model data and biological images shows that BrainsMapi achieves accurate registration across intramodal, individual, and multimodal datasets and can complete the registration of large-volume datasets (approximately 20 TB) within one day. In addition, it can register and integrate unregistered vectorized datasets into a common brain space. BrainsMapi will facilitate the comparison, reuse, and integration of a variety of brain datasets.
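The large-volume transformation step can be thought of as resampling the output space block by block, so a terabyte-scale volume never has to be held in memory at once. Below is a minimal sketch of that output-chunking idea using SciPy; the chunk size, the (matrix, offset) convention, and the in-memory arrays are illustrative assumptions, and this is not the BrainsMapi implementation.

```python
# A minimal sketch of warping a large volume block by block with a precomputed
# global affine, so only one output chunk is in memory at a time. Chunk size,
# the (matrix, offset) convention and the in-memory `moving` array are
# illustrative assumptions; this is not the BrainsMapi implementation.
import numpy as np
from scipy.ndimage import affine_transform

def warp_chunk(moving, matrix, offset, chunk_origin, chunk_shape):
    """Resample one output chunk; input_coord = matrix @ output_coord + offset,
    with output_coord expressed in the full-volume frame."""
    chunk_offset = matrix @ np.asarray(chunk_origin, float) + np.asarray(offset, float)
    return affine_transform(moving, matrix, offset=chunk_offset,
                            output_shape=chunk_shape, order=1)

def warp_volume_blockwise(moving, matrix, offset, out_shape, chunk=(256, 256, 256)):
    # In practice `moving` and `out` would be memory-mapped or chunked arrays.
    out = np.zeros(out_shape, dtype=moving.dtype)
    for z in range(0, out_shape[0], chunk[0]):
        for y in range(0, out_shape[1], chunk[1]):
            for x in range(0, out_shape[2], chunk[2]):
                origin = (z, y, x)
                shape = tuple(min(c, s - o) for c, s, o in zip(chunk, out_shape, origin))
                out[z:z + shape[0], y:y + shape[1], x:x + shape[2]] = \
                    warp_chunk(moving, matrix, offset, origin, shape)
    return out
```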
Project description:Establishing correspondences across brains for the purposes of comparison and group analysis is almost universally done by registering images to one another, either directly or via a template. However, there are many registration algorithms to choose from. A recent evaluation of fully automated nonlinear deformation methods applied to brain image registration was restricted to volume-based methods. The present study is the first to directly compare some of the most accurate of these volume registration methods with surface registration methods, as well as the first to compare registrations of whole-head and brain-only (de-skulled) images. We used permutation tests to compare the overlap or Hausdorff distance performance for more than 16,000 registrations between 80 manually labeled brain images. We compared every combination of volume-based and surface-based labels, registration, and evaluation. Our primary findings are the following: (1) de-skulling aids volume registration methods; (2) custom-made optimal average templates improve registration over direct pairwise registration; and (3) resampling volume labels on surfaces or converting surface labels to volumes introduces distortions that preclude a fair comparison between the highest-ranking volume and surface registration methods using present resampling methods. Based on these results, we recommend constructing a custom template from a limited sample drawn from the same or a similar representative population, using the same algorithm that will be used to register brains to the template.
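For concreteness, the statistical comparison can be sketched as a paired sign-flip permutation test on per-registration overlap differences between two methods. The snippet below is a hedged illustration, not the study's actual evaluation code; the Jaccard overlap measure, 10,000 permutations, and variable names are assumptions.

```python
# Hedged illustration: mean Jaccard overlap per registration, then a paired
# sign-flip permutation test on the differences between two methods. Metric
# choice, 10,000 permutations and variable names are assumptions, not the
# study's evaluation code.
import numpy as np

def mean_jaccard(labels_a, labels_b, label_ids):
    scores = []
    for lab in label_ids:
        a, b = labels_a == lab, labels_b == lab
        union = np.logical_or(a, b).sum()
        if union:
            scores.append(np.logical_and(a, b).sum() / union)
    return float(np.mean(scores))

def paired_permutation_test(diffs, n_perm=10_000, seed=0):
    """Two-sided sign-flip permutation test on paired overlap differences."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, float)
    observed = abs(diffs.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diffs.size))
    null = np.abs((signs * diffs).mean(axis=1))
    return (np.sum(null >= observed) + 1) / (n_perm + 1)
```

Here `diffs` would hold, for each registered image pair, the overlap obtained with method A minus that obtained with method B.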
Project description:Registration of data to a common frame of reference is an essential step in the analysis and integration of diverse neuroscientific data. To this end, volumetric brain atlases enable histological datasets to be spatially registered and analyzed, yet accurate registration remains expertise-dependent and slow. To address this limitation, we have trained a neural network, DeepSlice, to register mouse brain histological images to the Allen Brain Common Coordinate Framework, retaining registration accuracy while improving speed by more than 1,000-fold.
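The general idea behind such learning-based section registration is a network that regresses alignment parameters directly from the image, replacing iterative optimisation at inference time. The PyTorch sketch below shows that idea in its simplest form, a small CNN predicting a 2D affine warp toward an atlas plane; it is an illustrative stand-in under assumed input conventions, not the DeepSlice architecture, training data, or API.

```python
# Illustrative stand-in for learning-based section-to-atlas alignment: a tiny CNN
# regresses a 2D affine that warps the input section toward an atlas plane. Not
# the DeepSlice architecture, training scheme or API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 6)  # flattened 2x3 affine matrix
        # Start at the identity transform ("no warp") for stable training.
        nn.init.zeros_(self.head.weight)
        self.head.bias.data = torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])

    def forward(self, section):            # section: (N, 1, H, W)
        theta = self.head(self.features(section).flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, section.size(), align_corners=False)
        return F.grid_sample(section, grid, align_corners=False), theta
```

Training would minimise a similarity or parameter-regression loss between the warped section and its matched atlas plane.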
Project description:BACKGROUND AND OBJECTIVES:The construction of whole-body magnetic resonance (MR) imaging atlases allows statistical analysis to be performed, with applications in anomaly detection and in longitudinal and correlation studies. Atlas-based methods require a common coordinate system to which all subjects are mapped through image registration. Optimisation of the reference space is an important aspect that affects the subsequent analysis of the registered data, and having a reference space that is neutral with respect to local tissue volume is valuable in correlation studies. The purpose of this work is to generate a reference space for whole-body imaging that has zero voxel-wise average volume change when mapped to a cohort. METHODS:This work proposes an approach to register multiple whole-body images to a common template using volume changes to generate a synthetic reference space: starting with an initial reference, it refines the space by warping it with a deformation that brings the voxel-wise average volume change associated with the mappings of all the images in the cohort to zero. RESULTS:Experiments on fat/water-separated whole-body MR images show that the method effectively generates a reference space neutral with respect to volume changes, without reducing the quality of the registration or introducing artefacts in the anatomy, while providing better alignment than an implicit-reference groupwise approach. CONCLUSIONS:The proposed method quickly generates a reference space that is neutral with respect to local volume changes, retains the registration quality of a sharp template, and can be used for statistical analysis of voxel-wise correlations in large datasets of whole-body image data.
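The neutrality criterion can be made concrete through the log Jacobian determinant of each subject-to-reference deformation: a reference is volume-neutral when the voxel-wise average of these log determinants is close to zero. The NumPy sketch below computes that quantity from dense displacement fields; the field layout, spacing, and the assumption of fold-free (positive-determinant) deformations are illustrative, and this is not the paper's implementation.

```python
# Hedged sketch of the neutrality criterion: the voxel-wise mean of log Jacobian
# determinants of the subject-to-reference deformations should be ~0. Field layout
# (3, Z, Y, X), spacing and fold-free fields are assumptions; not the paper's code.
import numpy as np

def log_jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """disp: displacement field of shape (3, Z, Y, X) in physical units."""
    grads = [np.gradient(disp[i], *spacing, axis=(0, 1, 2)) for i in range(3)]
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)  # (..., i, j) = du_i/dx_j
    J = J + np.eye(3)                      # Jacobian of the mapping x -> x + u(x)
    return np.log(np.linalg.det(J))        # assumes positive determinants (no folding)

def mean_volume_change(displacement_fields, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise average log|J| across the cohort; ~0 everywhere means a neutral reference."""
    return np.mean([log_jacobian_determinant(d, spacing) for d in displacement_fields], axis=0)
```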
Project description:The extreme complexity of mammalian brains requires a comprehensive deconstruction of neuroanatomical structures. Scientists normally use a brain stereotactic atlas to determine the locations of neurons and neuronal circuits. However, different brain images are usually not naturally aligned even when they are imaged with the same setup, let alone under the differing resolutions and dataset sizes used in mesoscopic imaging. As a result, it is difficult to achieve high-throughput automatic registration without manual intervention. Here, we propose a deep learning-based registration method called DeepMapi to predict a deformation field used to register mesoscopic optical images to an atlas. We use a self-feedback strategy to address the problem of imbalanced training sets (which arises when sampling at a fixed step size in brains with nonuniform structures and deformations) and a dual-hierarchical network to capture both large and small deformations. By comparing DeepMapi with other registration methods on a set of ground truth datasets, including both optical and MRI images, we demonstrate its superiority. DeepMapi achieves fully automatic registration of mesoscopic micro-optical images, and even macroscopic MRI datasets, in minutes, with an accuracy comparable to that of manual annotations by anatomists.
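Whatever the network architecture, the final step of a learning-based deformable registration is to resample the moving volume with the predicted dense deformation field. The PyTorch sketch below shows that warping step in isolation, assuming a displacement field given in voxels with channels ordered (dz, dy, dx); it is illustrative only and does not reproduce DeepMapi's dual-hierarchical network or its self-feedback sampling.

```python
# Illustrative final step of learning-based deformable registration: resampling a
# 3D volume with a predicted dense displacement field (given in voxels, channels
# ordered dz, dy, dx). Not DeepMapi's dual-hierarchical network or its
# self-feedback sampling strategy.
import torch
import torch.nn.functional as F

def warp_volume(volume, flow):
    """volume: (N, 1, D, H, W); flow: (N, 3, D, H, W) voxel displacements (dz, dy, dx)."""
    n, _, d, h, w = volume.shape
    # Identity sampling grid in normalised [-1, 1] coordinates, (x, y, z) order.
    zz, yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
        torch.linspace(-1, 1, w), indexing="ij")
    identity = torch.stack((xx, yy, zz), dim=-1).unsqueeze(0).expand(n, -1, -1, -1, -1)
    # Convert voxel displacements to normalised units and reorder to (dx, dy, dz).
    scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1), 2.0 / max(d - 1, 1)])
    disp = flow.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]] * scale
    return F.grid_sample(volume, identity + disp, align_corners=True)
```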
Project description:Existing whole-brain models are generally tailored to the modelling of a particular data modality (e.g., fMRI or MEG/EEG). We propose that, despite the differing aspects of neural activity each modality captures, they originate from shared network dynamics. Building on the universal principles of self-organising delay-coupled nonlinear systems, we aim to link distinct features of brain activity, captured across modalities, to the dynamics unfolding on a macroscopic structural connectome. To jointly predict connectivity, spatiotemporal and transient features of distinct signal modalities, we consider two large-scale models, the Stuart-Landau (SL) and Wilson-Cowan (WC) models, which generate short-lived 40 Hz oscillations with varying levels of realism. To this end, we measure features of functional connectivity and metastable oscillatory modes (MOMs) in fMRI and MEG signals and compare them against simulated data. We show that both models can represent MEG functional connectivity (FC) and functional connectivity dynamics (FCD) and generate MOMs to a comparable degree. This is achieved by adjusting the global coupling and mean conduction time delay and, in the WC model, through the inclusion of a balance between excitation and inhibition. For both models, the omission of delays dramatically decreased performance. For fMRI, the SL model performed worse for FCD and MOMs, highlighting the importance of balanced dynamics for the emergence of spatiotemporal and transient patterns of ultra-slow dynamics. Notably, optimal working points varied across modalities, and no model was able to achieve a correlation with empirical FC higher than 0.4 across modalities for the same set of parameters. Nonetheless, both models displayed the emergence of FC patterns that extended beyond the constraints of the anatomical structure. Finally, we show that both models can generate MOMs with empirical-like properties such as size (the number of brain regions engaging in a mode) and duration (the continuous time interval during which a mode appears). Our results demonstrate the emergence of static and dynamic properties of neural activity at different timescales from networks of delay-coupled oscillators at 40 Hz. Given the higher dependence of simulated FC on the underlying structural connectivity, we suggest that mesoscale heterogeneities in neural circuitry may be critical for the emergence of parallel cross-modal functional networks and should be accounted for in future modelling endeavours.
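As a worked illustration of the model class compared above, the snippet below integrates delay-coupled Stuart-Landau oscillators at 40 Hz on an arbitrary weighted connectome using a simple Euler scheme. All parameter values (bifurcation parameter, coupling strength, time step) and the absence of noise are illustrative assumptions; the study's actual simulations, parameter sweeps, and the Wilson-Cowan counterpart are not reproduced here.

```python
# Toy simulation of delay-coupled Stuart-Landau oscillators at 40 Hz on a weighted
# connectome (Euler scheme). All parameter values are illustrative assumptions;
# the study's simulations, parameter sweeps and the Wilson-Cowan model are not
# reproduced here.
import numpy as np

def simulate_stuart_landau(C, delays_s, f=40.0, a=0.2, K=1.0, dt=1e-4, T=2.0, seed=0):
    """C: (N, N) coupling weights; delays_s: (N, N) conduction delays in seconds."""
    rng = np.random.default_rng(seed)
    N = C.shape[0]
    omega = 2.0 * np.pi * f
    steps = int(T / dt)
    lags = np.round(np.asarray(delays_s) / dt).astype(int)
    hist = lags.max() + 1                          # history length needed for the delays
    z = np.zeros((steps + hist, N), dtype=complex)
    z[:hist] = 1e-3 * (rng.standard_normal((hist, N)) + 1j * rng.standard_normal((hist, N)))
    cols = np.arange(N)
    for t in range(hist, steps + hist):
        zt = z[t - 1]
        delayed = z[t - 1 - lags, cols]            # delayed[i, j] = z_j(t - tau_ij)
        coupling = K * np.sum(C * (delayed - zt[:, None]), axis=1)
        dz = (a + 1j * omega - np.abs(zt) ** 2) * zt + coupling
        z[t] = zt + dt * dz
    return z[hist:]                                # complex node signals, shape (steps, N)
```

The real part of each node's signal can then be band-pass filtered, enveloped, and compared against empirical MEG/fMRI features such as FC, FCD, and MOMs.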
Project description:Imprecise registration between positron emission tomography (PET) and anatomical magnetic resonance (MR) images is a critical source of error in MR imaging-guided partial volume correction (MR-PVC). Here, we propose a novel framework for image registration and partial volume correction, which we term PVC-optimized registration (PoR), to address imprecise registration. The PoR framework iterates PVC and registration between the uncorrected PET image and smoothed PV-corrected images to obtain precise registration. We applied PoR to the [11C]PiB PET data of 92 participants obtained from the Alzheimer's Disease Neuroimaging Initiative database and compared the registration results, the PV-corrected standardized uptake value (SUV) and its ratio to the cerebellum (SUVR), and the intra-region coefficient of variation (CoV) between PoR and conventional registration. Significant differences in registration of as much as 2.74 mm and 3.02° were observed between the two methods (effect size < -0.8 or > 0.8), which resulted in considerable SUVR differences throughout the brain, reaching a maximal difference of 62.3% in the sensorimotor cortex. Intra-region CoV was significantly reduced throughout the brain using PoR. These results suggest that PoR reduces errors resulting from imprecise registration in PVC and is a useful method for accurately quantifying the amyloid burden with PET.
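The alternation described above can be sketched as a simple fixed-point loop: run PVC with the current transform, re-blur the corrected image to PET-like resolution, re-estimate the rigid PET-to-MR transform against it, and stop when the transform stabilises. The helper functions in the sketch (run_pvc, smooth_like_pet, register_rigid, transform_distance) are hypothetical placeholders, not a real PVC or registration API, and the stopping rule is an assumption.

```python
# Schematic sketch of the alternation described above. run_pvc, smooth_like_pet,
# register_rigid and transform_distance are hypothetical placeholders (not a real
# PVC or registration API), and the stopping rule is an assumption.
def pvc_optimized_registration(pet, mr_segmentation, psf_fwhm_mm, max_iter=10, tol=0.1):
    transform = None
    corrected = None
    for _ in range(max_iter):
        # Partial volume correction using the current PET-to-MR transform.
        corrected = run_pvc(pet, mr_segmentation, psf_fwhm_mm, transform)
        # Re-blur the PV-corrected image so it resembles the uncorrected PET,
        # then re-estimate the rigid transform against it.
        target = smooth_like_pet(corrected, psf_fwhm_mm)
        new_transform = register_rigid(moving=pet, fixed=target)
        if transform is not None and transform_distance(new_transform, transform) < tol:
            break
        transform = new_transform
    return transform, corrected
```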
Project description:Optical imaging is a common technique in ocean research. Diving robots, towed cameras, drop cameras and TV-guided sampling gear all produce image data of the underwater environment. Technological advances like 4K cameras, autonomous robots, high-capacity batteries and LED lighting now allow systematic optical monitoring at large spatial scales and over shorter times, but with increased data volume and velocity. Volume and velocity are further increased by growing fleets and emerging swarms of autonomous vehicles creating big data sets in parallel. This generates a need for automated data processing to harvest maximum information. Systematic data analysis benefits from calibrated, geo-referenced data with a clear metadata description, particularly for machine vision and machine learning. Hence, the expensive data acquisition must be documented, and data should be curated as soon as possible, backed up, and made publicly available. Here, we present a workflow towards sustainable marine image analysis. We describe guidelines for data acquisition, curation and management and apply them to the use case of a multi-terabyte deep-sea data set acquired by an autonomous underwater vehicle.
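To make the curation step concrete, the snippet below shows the kind of per-image, geo-referenced metadata record such a workflow would store alongside the raw files. The field names and values are illustrative assumptions and do not follow the paper's or any specific community schema.

```python
# Illustrative per-image metadata record for curated, geo-referenced marine imagery.
# Field names and values are assumptions and do not follow the paper's or any
# specific community schema.
image_record = {
    "image_file": "auv_dive042_img_000123.jpg",   # hypothetical file name
    "utc_timestamp": "2016-08-14T02:31:07Z",
    "latitude_deg": -4.8123,
    "longitude_deg": 11.9045,
    "depth_m": 4132.5,
    "altitude_m": 2.1,                 # camera height above the seafloor
    "platform": "AUV",                 # e.g. AUV, ROV, towed camera
    "camera_calibration_id": "cal_2016_07",       # hypothetical calibration reference
    "license": "CC-BY-4.0",
    "dataset_doi": None,               # filled in once the curated set is published
}
```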
Project description:The Automatic Non-rigid Histological Image Registration (ANHIR) challenge was organized to compare the performance of image registration algorithms on several kinds of microscopy histology images in a fair and independent manner. We assembled 8 datasets containing 355 images with 18 different stains, resulting in 481 image pairs to be registered. Registration accuracy was evaluated using manually placed landmarks. In total, 256 teams registered for the challenge, 10 submitted results, and 6 participated in the workshop. Here, we present the results of 7 well-performing methods from the challenge together with 6 well-known existing methods. The best methods used a coarse but robust initial alignment followed by non-rigid registration, employed multiresolution schemes, and were carefully tuned for the data at hand. They outperformed off-the-shelf methods, mostly by being more robust. The best methods could successfully register over 98% of all landmarks, and their mean target registration error (TRE) was 0.44% of the image diagonal. The challenge remains open to submissions, and all images are available for download.
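The evaluation metric can be written down directly: the Euclidean error of each warped landmark relative to its manually placed target, normalised by the image diagonal so that differently sized images are comparable. The sketch below is a hedged illustration of that computation; the `transform` callable and variable names are assumptions, not the challenge's evaluation code.

```python
# Hedged sketch of the landmark-based evaluation: target registration error of
# warped landmarks as a fraction of the image diagonal. The `transform` callable
# and variable names are assumptions, not the challenge's evaluation code.
import numpy as np

def relative_tre(source_landmarks, target_landmarks, transform, image_shape):
    """Mean Euclidean landmark error, normalised by the image diagonal."""
    warped = np.asarray([transform(p) for p in source_landmarks], float)
    errors = np.linalg.norm(warped - np.asarray(target_landmarks, float), axis=1)
    diagonal = np.linalg.norm(np.asarray(image_shape, float))
    return errors.mean() / diagonal
```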
Project description:Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in the life sciences. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for large amounts of data due to their high complexity. In this contribution, we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding, such as Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve up to a ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
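For orientation, the snippet below sketches the surrounding pipeline in its simplest form with scikit-image: seed detection followed by global Otsu thresholding and connected-component labelling. The seed-driven gradient/normal image transform that makes plain global thresholding work so well is the paper's own contribution and is not reproduced here; all parameters are illustrative.

```python
# Simplified sketch of the surrounding pipeline with scikit-image: seed detection,
# global Otsu thresholding and connected-component labelling. The seed-driven
# gradient/normal image transform described above is the paper's contribution and
# is not reproduced here; all parameters are illustrative.
from skimage.feature import blob_log
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label

def segment_nuclei(image, min_sigma=2, max_sigma=8):
    smoothed = gaussian(image, sigma=1)                         # works in 2D and 3D
    seeds = blob_log(smoothed, min_sigma=min_sigma, max_sigma=max_sigma)
    mask = smoothed > threshold_otsu(smoothed)
    return label(mask), seeds[:, :image.ndim]                   # labelled mask, seed coords
```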